Collective prompt tuning with relation inference for document-level relation extraction

Cited by: 9
Authors
Yuan, Changsen [1 ,2 ]
Cao, Yixin [3 ]
Huang, Heyan [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Beijing, Peoples R China
[2] Zhongguancun South St, Beijing, Peoples R China
[3] Singapore Management Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Natural language processing; Document-level relation extraction; Prompt-tuning; Various templates; Global reasoning;
DOI
10.1016/j.ipm.2023.103451
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Document-level relation extraction (RE) aims to extract relations between entities that may span multiple sentences. Existing methods mainly rely on two types of techniques: pre-trained language models (PLMs) and reasoning skills. Although various reasoning methods have been proposed, how to elicit learnt factual knowledge from PLMs for better reasoning ability has not yet been explored. In this paper, we propose a novel Collective Prompt Tuning with Relation Inference (CPT-RI) method for document-level RE that improves upon existing models in two aspects. First, considering the long input and the variety of templates, we adopt a collective prompt tuning method based on an update-and-reuse strategy: a generic prompt is encoded once and then updated with each entity pair to produce relation-specific prompts. Second, we introduce a relation inference module that conducts global reasoning over all relation prompts via constrained semantic segmentation. Extensive experiments on three publicly available benchmark datasets demonstrate the effectiveness of CPT-RI compared with the baseline model (ATLOP (Zhou et al., 2021)), improving F1 by 0.57% on the DocRED dataset, 2.20% on the CDR dataset, and 2.30% on the GDA dataset. Further ablation studies also verify the effects of collective prompt tuning and relation inference.
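To make the update-and-reuse strategy above concrete, the following is a minimal illustrative sketch in PyTorch: a shared generic prompt is held as a learnable parameter (encoded once), then updated with each entity-pair representation to form relation-specific prompts, which are pooled and scored over the relation set. All module names, shapes, and the additive update are hypothetical assumptions for illustration, not the authors' implementation; the relation inference module that reasons over all prompts is omitted.

import torch
import torch.nn as nn

class CollectivePromptTuning(nn.Module):
    """Sketch of the update-and-reuse prompt strategy (hypothetical)."""

    def __init__(self, hidden_dim: int, prompt_len: int, num_relations: int):
        super().__init__()
        # Generic prompt: a learnable parameter shared across all entity pairs.
        self.generic_prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim))
        # Update step: fuse head/tail entity embeddings into one pair vector.
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)
        # Score the pooled relation-specific prompt against all relations.
        self.classifier = nn.Linear(hidden_dim, num_relations)

    def forward(self, head: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
        # head, tail: (batch, hidden_dim) contextual entity embeddings from a PLM.
        pair = self.update(torch.cat([head, tail], dim=-1))   # (batch, hidden_dim)
        # Reuse the generic prompt, updated per pair by a broadcast addition.
        specific = self.generic_prompt.unsqueeze(0) + pair.unsqueeze(1)
        # Mean-pool the relation-specific prompt and emit relation logits.
        return self.classifier(specific.mean(dim=1))          # (batch, num_relations)

# Usage sketch: DocRED has 96 relation types plus a no-relation class;
# all sizes here are illustrative.
model = CollectivePromptTuning(hidden_dim=768, prompt_len=4, num_relations=97)
logits = model(torch.randn(8, 768), torch.randn(8, 768))      # shape (8, 97)

The additive broadcast is just one plausible way to specialize a shared prompt per entity pair; the paper's actual update mechanism may differ.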
Pages: 13
Related Papers
50 records in total
  • [41] CorefDRE: Coref-Aware Document-Level Relation Extraction
    Xue, Zhongxuan
    Zhong, Jiang
    Dai, Qizhu
    Li, Rongzhen
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2022, PT III, 2022, 13370: 116-128
  • [42] Document-level Relation Extraction with Progressive Self-distillation
    Wang, Quan
    Mao, Zhendong
    Gao, Jie
    Zhang, Yongdong
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (06)
  • [43] Document-level relation extraction with Entity-Selection Attention
    Yuan, Changsen
    Huang, Heyan
    Feng, Chong
    Shi, Ge
    Wei, Xiaochi
    INFORMATION SCIENCES, 2021, 568: 163-174
  • [44] DoreBer: Document-Level Relation Extraction Method Based on BernNet
    Yuan, Boya
    Xu, Liwen
    IEEE ACCESS, 2023, 11: 136468-136477
  • [45] Document-Level Event Temporal Relation Extraction with Context Information
    Wang, J.
    Shi, C.
    Zhang, J.
    Yu, X.
    Liu, Y.
    Cheng, X.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (11): 2475-2484
  • [46] Document-Level Relation Extraction with Structure Enhanced Transformer Encoder
    Liu, Wanlong
    Zhou, Li
    Zeng, Dingyi
    Qu, Hong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [47] Document-level Relation Extraction with Entity Interaction and Commonsense Knowledge
    Liu, Shen
    Shen, Xinshu
    Liu, Tingting
    Lan, Man
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [48] Document-level relation extraction with entity mentions deep attention
    Xu, Yangsheng
    Tian, Jiaxin
    Tang, Mingwei
    Tao, Linping
    Wang, Liuxuan
    COMPUTER SPEECH AND LANGUAGE, 2024, 84
  • [49] Document-Level Relation Extraction with Deep Gated Graph Reasoning
    Liang, Zeyu
    INTERNATIONAL JOURNAL OF UNCERTAINTY FUZZINESS AND KNOWLEDGE-BASED SYSTEMS, 2024, 32 (07): 1037-1050
  • [50] Three-stage document-level entity relation extraction
    Lu, Ben
    Wang, Xianchuan
    Ming, Wenkai
    Wang, Xianchao
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (04)