Collective prompt tuning with relation inference for document-level relation extraction

Cited: 9
|
Authors
Yuan, Changsen [1 ,2 ]
Cao, Yixin [3 ]
Huang, Heyan [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Beijing, Peoples R China
[2] Zhongguancun, South St, Beijing, Peoples R China
[3] Singapore Management Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Natural language processing; Document-level relation extraction; Prompt-tuning; Various templates; Global reasoning;
DOI
10.1016/j.ipm.2023.103451
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Document-level relation extraction (RE) aims to extract relations between entities that may span multiple sentences. Existing methods mainly rely on two types of techniques: pre-trained language models (PLMs) and reasoning skills. Although various reasoning methods have been proposed, how to elicit the factual knowledge learnt by PLMs for better reasoning ability has not yet been explored. In this paper, we propose a novel Collective Prompt Tuning with Relation Inference (CPT-RI) for document-level RE, which improves upon existing models in two aspects. First, considering the long input and the variety of templates, we adopt a collective prompt tuning method with an update-and-reuse strategy: a generic prompt is first encoded and then updated with exact entity pairs to form relation-specific prompts. Second, we introduce a relation inference module that conducts global reasoning over all relation prompts via constrained semantic segmentation. Extensive experiments on three publicly available benchmark datasets demonstrate the effectiveness of the proposed CPT-RI compared with the baseline model (ATLOP (Zhou et al., 2021)), which it improves by 0.57% in F1 score on the DocRED dataset, 2.20% on the CDR dataset, and 2.30% on the GDA dataset. In addition, further ablation studies verify the effects of the collective prompt tuning and relation inference.
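The abstract's "update-and-reuse" idea can be illustrated with a minimal sketch: a shared generic prompt is encoded once and then conditioned on each exact entity pair to produce a relation-specific prompt, which is finally scored against all relation types. All module names, dimensions, and the simple classification head below are illustrative assumptions, not the authors' released implementation of CPT-RI or its constrained-semantic-segmentation inference module.

```python
# Sketch of collective prompt tuning (update-and-reuse), assuming PLM entity-pair
# embeddings are already available; shapes and modules are hypothetical.
import torch
import torch.nn as nn

class CollectivePromptSketch(nn.Module):
    def __init__(self, hidden_dim=768, num_relations=97, prompt_len=4):
        super().__init__()
        # Generic prompt: encoded once, then reused for every entity pair.
        self.generic_prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim))
        # Update step: condition the shared prompt on a (head, tail) entity pair.
        self.update = nn.Linear(3 * hidden_dim, hidden_dim)
        # Relation scoring over all pairs; a stand-in for the paper's
        # relation inference via constrained semantic segmentation.
        self.classifier = nn.Linear(hidden_dim, num_relations)

    def forward(self, head_emb, tail_emb):
        # head_emb, tail_emb: (num_pairs, hidden_dim) embeddings from a PLM encoder.
        num_pairs = head_emb.size(0)
        prompt = self.generic_prompt.mean(dim=0, keepdim=True).expand(num_pairs, -1)
        # Update-and-reuse: fuse the generic prompt with each exact entity pair
        # to obtain a relation-specific prompt per pair.
        pair_prompt = torch.tanh(
            self.update(torch.cat([prompt, head_emb, tail_emb], dim=-1))
        )
        return self.classifier(pair_prompt)  # (num_pairs, num_relations)

if __name__ == "__main__":
    # Toy usage with random stand-in embeddings.
    model = CollectivePromptSketch()
    h, t = torch.randn(5, 768), torch.randn(5, 768)
    print(model(h, t).shape)  # torch.Size([5, 97])
```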
Pages: 13
Related Papers
50 records in total
  • [21] Learning Logic Rules for Document-level Relation Extraction
    Ru, Dongyu
    Sun, Changzhi
    Feng, Jiangtao
    Qiu, Lin
    Zhou, Hao
    Zhang, Weinan
    Yu, Yong
    Li, Lei
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1239 - 1250
  • [22] Document-level Relation Extraction With Entity and Context Information
    Huang, He-Yan
    Yuan, Chang-Sen
    Feng, Chong
    Zidonghua Xuebao/Acta Automatica Sinica, 2024, 50 (10): : 1953 - 1962
  • [23] HistRED: A Historical Document-Level Relation Extraction Dataset
    Yang, Soyoung
    Choi, Minseok
    Cho, Youngwoo
    Choo, Jaegul
    arXiv, 2023,
  • [24] Evidence-aware Document-level Relation Extraction
    Xu, Tianyu
    Hua, Wen
    Qu, Jianfeng
    Li, Zhixu
    Xu, Jiajie
    Liu, An
    Zhao, Lei
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2311 - 2320
  • [25] A Hierarchical Network for Multimodal Document-Level Relation Extraction
    Kong, Lingxing
    Wang, Jiuliang
    Ma, Zheng
    Zhou, Qifeng
    Zhang, Jianbing
    He, Liang
    Chen, Jiajun
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16, 2024, : 18408 - 18416
  • [26] A Document-Level Relation Extraction Framework with Dynamic Pruning
    Zhang, Hanyue
    Li, Li
    Shen, Jun
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VIII, 2023, 14261 : 13 - 25
  • [27] Rethinking Document-Level Relation Extraction: A Reality Check
    Li, Jing
    Wang, Yequan
    Zhang, Shuai
    Zhang, Min
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 5715 - 5730
  • [28] HistRED: A Historical Document-Level Relation Extraction Dataset
    Yang, Soyoung
    Choi, Minseok
    Cho, Youngwoo
    Choo, Jaegul
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 3207 - 3224
  • [29] Document-level Relation Extraction via Separate Relation Representation and Logical Reasoning
    Huang, Heyan
    Yuan, Changsen
    Liu, Qian
    Cao, Yixin
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (01)
  • [30] EIDER: Empowering Document-level Relation Extraction with Efficient Evidence Extraction and Inference-stage Fusion
    Xie, Yiqing
    Shen, Jiaming
    Li, Sha
    Mao, Yuning
    Han, Jiawei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 257 - 268