Enhancing relation extraction using multi-task learning with SDP evidence

Cited by: 0
Authors
Wang, Hailin [1 ,2 ]
Zhang, Dan [1 ,2 ]
Liu, Guisong [1 ,2 ]
Huang, Li [1 ,2 ]
Qin, Ke [3 ]
Affiliations
[1] Southwestern Univ Finance & Econ, Sch Comp & Artificial Intelligence, Complex Lab New Finance & Econ, Chengdu 611130, Peoples R China
[2] Kash Inst Elect & Informat Ind, Kashgar, Xinjiang, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Relation extraction; Multi-task learning; Shortest dependency path; Evidence; ATTENTION; MODEL;
DOI
10.1016/j.ins.2024.120610
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification
0812;
Abstract
Relation extraction (RE) is a crucial subtask of information extraction, which involves recognizing the relation between entity pairs in a sentence. Previous studies have extensively employed syntactic information, notably the shortest dependency path (SDP), to collect word evidence, termed SDP evidence, which gives clues about the given entity pair and thus improves RE. Nevertheless, prevalent transformer-based techniques lack syntactic information and cannot effectively model the essential syntactic clues that support relations. This study applies multi-task learning to address these issues by incorporating an SDP token position prediction task into the RE task. To this end, we introduce SGA, an SDP evidence guiding approach that transfers the SDP evidence into two novel supervisory signal labels: an SDP tokens label and an SDP matrix label. The former guides the attention modules to assign high attention weights to SDP token positions, emphasizing relational clues. Meanwhile, the latter supervises SGA to predict a parameterized asymmetric product matrix among the SDP tokens for RE. Experimental outcomes demonstrate the model's enhanced ability to leverage SDP information, directing the attention modules and predicted matrix labels to focus on SDP evidence. Consequently, our proposed approach surpasses the best publicly available baselines on four RE datasets: SemEval2010-Task8, KBP37, NYT, and WebNLG.
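The shortest dependency path at the heart of the abstract's supervisory signals is computed over the sentence's dependency parse, treated as an undirected tree between the two entity tokens. A minimal pure-Python sketch (not the paper's implementation; the head-index parse and example sentence are illustrative, in the style of SemEval-2010 Task 8 examples):

```python
from collections import deque

def shortest_dependency_path(heads, src, dst):
    """Return the token indices on the shortest path between src and dst
    in a dependency tree given as a head-index array (root has head -1).
    The tree is treated as an undirected graph, as is standard for SDP."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child].append(head)
            adj[head].append(child)
    # BFS from src, recording predecessors to reconstruct the path.
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for nbr in adj[node]:
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)
    # Walk predecessors back from dst to src.
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

# Illustrative parse of "The burst has been caused by pressure":
# heads[i] is the index of token i's syntactic head (-1 = root).
tokens = ["The", "burst", "has", "been", "caused", "by", "pressure"]
heads  = [1, 4, 4, 4, -1, 6, 4]
sdp = shortest_dependency_path(heads, 1, 6)  # entities: "burst", "pressure"
print([tokens[i] for i in sdp])  # → ['burst', 'caused', 'pressure']
```

The positions returned here (1, 4, 6) are exactly the kind of SDP token positions that the abstract's "SDP tokens label" would mark as supervision targets for the attention modules.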
Pages: 15