Self-supervised Bidirectional Prompt Tuning for Entity-enhanced Pre-trained Language Model

Cited by: 0
Authors
Zou, Jiaxin [1 ]
Xu, Xianghong [1 ]
Hou, Jiawei [2 ]
Yang, Qiang [2 ]
Zheng, Hai-Tao [1 ,3 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen 518055, Peoples R China
[2] Weixin Grp, Dept Search & Applicat, Tencent, Peoples R China
[3] Pengcheng Lab, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/IJCNN54540.2023.10192045
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the promotion of the pre-training paradigm, researchers are increasingly focusing on injecting external knowledge, such as entities and triplets from knowledge graphs, into pre-trained language models (PTMs) to improve their understanding and logical reasoning abilities. This yields significant improvements on natural language understanding and generation tasks, along with a degree of interpretability. In this paper, we propose a novel two-stage entity knowledge enhancement pipeline for Chinese pre-trained models based on "bidirectional" prompt tuning. The pipeline consists of a "forward" stage, in which we construct fine-grained entity type prompt templates to boost PTMs injected with entity knowledge, and a "backward" stage, where the trained templates are used to generate type-constrained, context-dependent negative samples for contrastive learning. Experiments on six classification tasks in the Chinese Language Understanding Evaluation (CLUE) benchmark demonstrate that our approach significantly improves upon the baseline results on most datasets, particularly those that rely heavily on diverse and extensive knowledge.
Pages: 8
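
Below is a minimal, illustrative sketch of the two-stage pipeline summarized in the abstract above. It is not the authors' released code: the prompt template wording, the fine-grained type tokens, the "bert-base-chinese" checkpoint, and the InfoNCE-style contrastive objective are all assumptions made purely for illustration.

```python
# Sketch of the "bidirectional" prompt-tuning pipeline (assumptions noted above).
import torch
import torch.nn.functional as F
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")

# "Forward" stage: wrap the entity mention in a cloze-style, fine-grained
# entity-type template and tune the PTM to fill the [MASK] slot with a type token.
def forward_stage_loss(sentence: str, entity: str, type_token: str) -> torch.Tensor:
    prompt = f"{sentence} 其中 {entity} 的实体类型是 {tokenizer.mask_token} 。"  # hypothetical template
    inputs = tokenizer(prompt, return_tensors="pt")
    labels = inputs["input_ids"].clone()
    mask_pos = inputs["input_ids"] == tokenizer.mask_token_id
    labels[~mask_pos] = -100                                   # supervise only the [MASK] position
    labels[mask_pos] = tokenizer.convert_tokens_to_ids(type_token)
    return model(**inputs, labels=labels).loss

# "Backward" stage: the trained template is reused to pick a context-dependent
# negative entity of a different type; anchor/positive/negative sentences then
# feed an InfoNCE-style contrastive loss over [CLS] embeddings.
def cls_embedding(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    return model.bert(**inputs).last_hidden_state[:, 0]        # [1, hidden]

def backward_stage_loss(anchor: str, positive: str, negative: str,
                        temperature: float = 0.05) -> torch.Tensor:
    a, p, n = (cls_embedding(t) for t in (anchor, positive, negative))
    sims = torch.cat([F.cosine_similarity(a, p),
                      F.cosine_similarity(a, n)]) / temperature
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([0]))  # index 0 = positive pair
```

A training loop would combine the two losses over batches; the fine-grained type inventory, the negative-sampling strategy, and the loss weighting are details the abstract does not specify.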