Recall, Retrieve and Reason: Towards Better In-Context Relation Extraction

Cited by: 0
Authors
Li, Guozheng [1 ]
Wang, Peng [1 ,2 ]
Ke, Wenjun [1 ,2 ]
Guo, Yikai [3 ]
Ji, Ke [1 ]
Shang, Ziyu [1 ]
Liu, Jiajun [1 ]
Xu, Zijie [1 ]
Affiliations
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing, Jiangsu, Peoples R China
[2] Southeast Univ, Minist Educ, Key Lab New Generat Artificial Intelligence Techn, Nanjing, Jiangsu, Peoples R China
[3] Beijing Inst Comp Technol & Applicat, Beijing, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Relation extraction (RE) aims to identify relations between entities mentioned in texts. Although large language models (LLMs) have demonstrated impressive in-context learning (ICL) abilities on various tasks, they still perform poorly compared to most supervised fine-tuned RE methods. Using ICL for RE with LLMs faces two challenges: (1) retrieving good demonstrations from training examples, and (2) enabling LLMs to exhibit strong ICL abilities in RE. On the one hand, retrieving good demonstrations is a non-trivial process in RE and easily results in low relevance with respect to entities and relations. On the other hand, ICL with an LLM achieves poor performance in RE when RE differs from language modeling in nature or the LLM is not large enough. In this work, we propose a novel recall-retrieve-reason RE framework that synergizes LLMs with retrieval corpora (training examples) to enable relevant retrieval and reliable in-context reasoning. Specifically, we distill consistent ontological knowledge from the training datasets so that LLMs generate relevant entity pairs, grounded by the retrieval corpora, as valid queries. These entity pairs are then used to retrieve relevant training examples from the retrieval corpora as demonstrations, and the LLMs are instruction-tuned to conduct better ICL with them. Extensive experiments on different LLMs and RE datasets demonstrate that our method generates relevant and valid entity pairs and boosts the ICL abilities of LLMs, achieving competitive or new state-of-the-art performance on sentence-level RE compared with previous supervised fine-tuning and ICL-based methods.
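To make the three-stage pipeline in the abstract concrete, below is a minimal sketch of how recall, retrieve, and reason could be wired together. It is not the authors' released implementation: the `Example` record, the prompt templates, and the assumption that `llm` is a plain text-in/text-out callable are all illustrative stand-ins.

```python
# Hypothetical sketch of the recall-retrieve-reason loop; prompts, helper
# names, and data layout are assumptions, not the paper's actual code.
from dataclasses import dataclass

@dataclass
class Example:
    sentence: str
    head: str
    tail: str
    relation: str

def recall_entity_pairs(llm, sentence, head, tail, k=5):
    """Step 1 (recall): ask the LLM for entity pairs of the same ontological
    types as (head, tail), to be used as queries over the retrieval corpus."""
    prompt = (
        f"Sentence: {sentence}\n"
        f"Target entity pair: ({head}, {tail})\n"
        f"List {k} entity pairs with the same ontological types that occur "
        f"in the training corpus, one pair per line as 'head, tail'."
    )
    return [tuple(line.split(", ")[:2])
            for line in llm(prompt).splitlines() if ", " in line]

def retrieve_demonstrations(corpus, entity_pairs, n=5):
    """Step 2 (retrieve): pick training examples whose entity pair matches
    one of the recalled pairs, keeping at most n demonstrations."""
    wanted = {tuple(p) for p in entity_pairs}
    return [ex for ex in corpus if (ex.head, ex.tail) in wanted][:n]

def reason(llm, demos, sentence, head, tail):
    """Step 3 (reason): standard ICL call with the retrieved demonstrations."""
    demo_block = "\n".join(
        f"Sentence: {d.sentence}\nEntity pair: ({d.head}, {d.tail})\n"
        f"Relation: {d.relation}"
        for d in demos
    )
    query = f"Sentence: {sentence}\nEntity pair: ({head}, {tail})\nRelation:"
    return llm(f"{demo_block}\n{query}").strip()
```

Usage would chain the three steps for each test instance: recall entity pairs, retrieve matching training examples as demonstrations, then let the (instruction-tuned) LLM predict the relation label in context.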
Pages: 6368-6376
Page count: 9
Related Papers (50 in total)
  • [1] Learning To Retrieve Prompts for In-Context Learning
    Rubin, Ohad
    Herzig, Jonathan
    Berant, Jonathan
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 2655 - 2671
  • [2] Learning to Retrieve In-Context Examples for Large Language Models
    Wang, Liang
    Yang, Nan
    Wei, Furu
    PROCEEDINGS OF THE 18TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 1752 - 1767
  • [3] Towards In-context Scene Understanding
    Balazevic, Ivana
    Steiner, David
    Parthasarathy, Nikhil
    Arandjelovic, Relja
    Henaff, Olivier J.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] Guideline Learning for In-Context Information Extraction
    Pang, Chaoxu
    Cao, Yixuan
    Ding, Qiang
    Luo, Ping
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 15372 - 15389
  • [5] GPT-RE: In-context Learning for Relation Extraction using Large Language Models
    Wan, Zhen
    Cheng, Fei
    Mao, Zhuoyuan
    Liu, Qianying
    Song, Haiyue
    Li, Jiwei
    Kurohashi, Sadao
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 3534 - 3547
  • [6] Towards More Unified In-context Visual Understanding
    Sheng, Dianmo
    Chen, Dongdong
    Tan, Zhentao
    Liu, Qiankun
    Chu, Qi
    Bao, Jianmin
    Gong, Tao
    Liu, Bin
    Xu, Shengwei
    Yu, Nenghai
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 13362 - 13372
  • [7] On the Relation between Sensitivity and Accuracy in In-Context Learning
    Chen, Yanda
    Zhao, Chen
    Yu, Zhou
    McKeown, Kathleen
    He, He
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 155 - 167
  • [8] Linear Transformers with Learnable Kernel Functions are Better In-Context Models
    Aksenov, Yaroslav
    Balagansky, Nikita
    Vaina, Sofia Maria Lo Cicero
    Shaposhnikov, Boris
    Gorbatov, Alexey
    Gavrilov, Daniil
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 9584 - 9597
  • [9] Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment
    Tanwar, Eshaan
    Dutta, Subhabrata
    Borthakur, Manish
    Chakraborty, Tanmoy
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 6292 - 6307
  • [10] Towards the Effect of Examples on In-Context Learning: A Theoretical Case Study
    He, Pengfei
    Cui, Yingqian
    Xu, Han
    Liu, Hui
    Yamada, Makoto
    Tang, Jiliang
    Xing, Yue
    STAT, 2025, 14 (01):