Collective prompt tuning with relation inference for document-level relation extraction

Cited by: 9
Authors
Yuan, Changsen [1 ,2 ]
Cao, Yixin [3 ]
Huang, Heyan [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Beijing, Peoples R China
[2] Zhongguancun, South St, Beijing, Peoples R China
[3] Singapore Management Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Natural language processing; Document-level relation extraction; Prompt-tuning; Various templates; Global reasoning;
DOI
10.1016/j.ipm.2023.103451
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline Classification Code
0812 ;
Abstract
Document-level relation extraction (RE) aims to extract relations between entities that may span multiple sentences. Existing methods mainly rely on two types of techniques: pre-trained language models (PLMs) and reasoning skills. Although various reasoning methods have been proposed, how to elicit the factual knowledge learnt by PLMs for better reasoning ability has not yet been explored. In this paper, we propose a novel Collective Prompt Tuning with Relation Inference (CPT-RI) for document-level RE, which improves upon existing models in two aspects. First, considering the long input and various templates, we adopt a collective prompt tuning method based on an update-and-reuse strategy: a generic prompt is encoded once and then updated with each exact entity pair to obtain relation-specific prompts. Second, we introduce a relation inference module that conducts global reasoning over all relation prompts via constrained semantic segmentation. Extensive experiments on publicly available benchmark datasets demonstrate the effectiveness of the proposed CPT-RI compared to the baseline model (ATLOP (Zhou et al., 2021)), improving F1 by 0.57% on the DocRED dataset, 2.20% on the CDR dataset, and 2.30% on the GDA dataset. In addition, further ablation studies verify the effects of both the collective prompt tuning and the relation inference.
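The update-and-reuse strategy and the global relation inference described above can be sketched in simplified form. This is a minimal illustration, not the paper's actual implementation: the prompt update rule, the embedding dimensions, and the use of a plain softmax in place of constrained semantic segmentation are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NUM_RELATIONS, NUM_PAIRS = 8, 4, 3

# A single generic prompt vector, encoded once and reused for every
# entity pair (hypothetical stand-in for the update-and-reuse strategy).
generic_prompt = rng.normal(size=DIM)

def relation_specific_prompt(head_emb, tail_emb):
    # "Update" step: combine the reused generic prompt with the exact
    # entity pair to obtain a relation-specific prompt.
    return generic_prompt + 0.5 * (head_emb + tail_emb)

def infer_relations(prompts, relation_weights):
    # Global reasoning sketch: score all relation prompts jointly,
    # then normalize across relations with a softmax (the paper instead
    # applies constrained semantic segmentation at this step).
    scores = prompts @ relation_weights.T            # (pairs, relations)
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

pairs = [(rng.normal(size=DIM), rng.normal(size=DIM))
         for _ in range(NUM_PAIRS)]
prompts = np.stack([relation_specific_prompt(h, t) for h, t in pairs])
probs = infer_relations(prompts, rng.normal(size=(NUM_RELATIONS, DIM)))
```

Each row of `probs` is a distribution over candidate relations for one entity pair; the key point of the sketch is that the generic prompt is encoded only once, while the per-pair update is cheap.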
Pages: 13
Related Papers
50 records
  • [31] Enhancing Document-Level Relation Extraction with Entity Pronoun Resolution and Relation Correlation
    Pi, Qiankun
    Lu, Jicang
    Sun, Yepeng
    Zhu, Taojie
    Xia, Yi
    Yang, Chenguang
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT II, NLPCC 2024, 2025, 15360 : 174 - 186
  • [32] TDGI: Translation-Guided Double-Graph Inference for Document-Level Relation Extraction
    Zhang, Lingling
    Zhong, Yujie
    Zheng, Qinghua
    Liu, Jun
    Wang, Qianying
    Wang, Jiaxin
    Chang, Xiaojun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (04) : 2647 - 2659
  • [33] Improving inference via rich path information and logic rules for document-level relation extraction
    Su, Huizhe
    Xie, Shaorong
    Yu, Hang
    Yuan, Changsen
    Wang, Xinzhi
    Luo, Xiangfeng
    KNOWLEDGE AND INFORMATION SYSTEMS, 2025, : 4207 - 4231
  • [34] Evidence Reasoning and Curriculum Learning for Document-Level Relation Extraction
    Xu, Tianyu
    Qu, Jianfeng
    Hua, Wen
    Li, Zhixu
    Xu, Jiajie
    Liu, An
    Zhao, Lei
    Zhou, Xiaofang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (02) : 594 - 607
  • [35] Double Graph Based Reasoning for Document-level Relation Extraction
    Zeng, Shuang
    Xu, Runxin
    Chang, Baobao
    Li, Lei
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 1630 - 1640
  • [36] Document-Level Relation Extraction with Entity Enhancement and Context Refinement
    Zou, Meng
    Yang, Qiang
    Qu, Jianfeng
    Li, Zhixu
    Liu, An
    Zhao, Lei
    Chen, Zhigang
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2021, PT II, 2021, 13081 : 347 - 362
  • [37] A Personalized Federated Framework for Document-level Biomedical Relation Extraction
    Xiao, Yan
    Jin, Yaochu
    Zhang, Haoyu
    Huo, Xu
    Liu, Qiqi
    Zheng, Zeqi
    2024 6TH INTERNATIONAL CONFERENCE ON DATA-DRIVEN OPTIMIZATION OF COMPLEX SYSTEMS, DOCS 2024, 2024, : 457 - 461
  • [38] Enhancing Document-Level Relation Extraction by Entity Knowledge Injection
    Wang, Xinyi
    Wang, Zitao
    Sun, Weijian
    Hu, Wei
    SEMANTIC WEB - ISWC 2022, 2022, 13489 : 39 - 56
  • [39] Modular Self-Supervision for Document-Level Relation Extraction
    Zhang, Sheng
    Wong, Cliff
    Usuyama, Naoto
    Jain, Sarthak
    Naumann, Tristan
    Poon, Hoifung
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 5291 - 5302
  • [40] Towards Integration of Discriminability and Robustness for Document-Level Relation Extraction
    Guo, Jia
    Kok, Stanley
    Bing, Lidong
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2606 - 2617