PokeMQA: Programmable knowledge editing for Multi-hop Question Answering

Times Cited: 0
Authors
Gu, Hengrui [1 ]
Zhou, Kaixiong [2 ]
Han, Xiaotian [3 ]
Liu, Ninghao [4 ]
Wang, Ruobing [1 ]
Wang, Xin [1 ]
Affiliations
[1] Jilin Univ, Sch Artificial Intelligence, Jilin, Peoples R China
[2] North Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC USA
[3] Texas A&M Univ, Dept Comp Sci & Engn, College Stn, TX USA
[4] Univ Georgia, Sch Comp, Athens, GA USA
Funding
National Natural Science Foundation of China
Keywords
DOI: Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multi-hop question answering (MQA) is a challenging task for evaluating a machine's comprehension and reasoning abilities, on which large language models (LLMs) have widely achieved human-comparable performance. Because knowledge facts in the real world change over time, knowledge editing has been explored to update models with up-to-date facts while avoiding expensive re-training or fine-tuning. Starting from the edited fact, the updated model needs to propagate cascading changes along the MQA reasoning chain. Previous art simply adopts a mix-up prompt to instruct LLMs to conduct multiple reasoning tasks sequentially, including question decomposition, answer generation, and conflict checking by comparison with the edited facts. However, coupling these functionally diverse reasoning tasks inhibits the LLMs' advantages in comprehending and answering questions while burdening them with conflict checking, a task at which they are unskilled. We thus propose a framework, Programmable knowledge editing for Multi-hop Question Answering (PokeMQA), to decouple these jobs. Specifically, we prompt LLMs to decompose the knowledge-augmented multi-hop question while interacting with a detached, trainable scope detector that modulates LLM behavior depending on an external conflict signal. Experiments on three LLM backbones and two benchmark datasets validate our superiority in knowledge editing for MQA: PokeMQA outperforms all competitors by a large margin in almost all settings and consistently produces a reliable reasoning process. Our code is available at https://github.com/Hengrui-Gu/PokeMQA.
Pages: 8069-8083
Number of pages: 15
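
The following is a minimal Python sketch of the decoupled pipeline described in the abstract: the LLM only decomposes and answers subquestions, while a detached scope detector supplies the external conflict signal. It is an illustration under assumptions, not the authors' implementation: llm_decompose, llm_answer, ScopeDetector, and EditedFact are hypothetical stand-ins, and the trained scope detector is replaced here by simple keyword overlap so the sketch stays self-contained; the actual code is at the GitHub link above.

# Hypothetical sketch of PokeMQA-style decoupled editing for MQA (not the official code).
from dataclasses import dataclass


@dataclass
class EditedFact:
    """One edited knowledge fact stored as a natural-language statement."""
    statement: str          # e.g. "The CEO of X Corp is Alice."
    keywords: set[str]      # lexical signature used by the placeholder detector


class ScopeDetector:
    """Placeholder for the detached, trainable scope detector.

    The real detector is a trained model scoring whether a subquestion falls
    within the scope of an edited fact; keyword overlap keeps this sketch runnable.
    """

    def __init__(self, edit_memory: list[EditedFact], threshold: float = 0.5):
        self.edit_memory = edit_memory
        self.threshold = threshold

    def detect(self, subquestion: str) -> EditedFact | None:
        tokens = set(subquestion.lower().split())
        best_fact, best_score = None, 0.0
        for fact in self.edit_memory:
            score = len(tokens & fact.keywords) / max(len(fact.keywords), 1)
            if score > best_score:
                best_fact, best_score = fact, score
        return best_fact if best_score >= self.threshold else None


def llm_decompose(question: str) -> list[str]:
    """Stub for the prompted LLM that splits a multi-hop question into subquestions."""
    # In practice this is an in-context-learning prompt to the LLM backbone.
    return [question]  # trivial fallback: treat the question as a single hop


def llm_answer(subquestion: str, injected_fact: str | None = None) -> str:
    """Stub for the prompted LLM answering one hop, optionally given an edited fact."""
    prefix = f"Given the updated fact: {injected_fact}\n" if injected_fact else ""
    return f"<answer to: {prefix}{subquestion}>"


def answer_multihop(question: str, detector: ScopeDetector) -> str:
    """Decoupled loop: the LLM decomposes and answers; conflict checking is external."""
    answer = ""
    for sub_q in llm_decompose(question):
        grounded_q = sub_q.replace("[prev]", answer)      # resolve the previous hop's answer
        hit = detector.detect(grounded_q)                 # external conflict signal
        answer = llm_answer(grounded_q, injected_fact=hit.statement if hit else None)
    return answer


if __name__ == "__main__":
    memory = [EditedFact("The CEO of X Corp is Alice.", {"ceo", "x", "corp"})]
    detector = ScopeDetector(memory)
    print(answer_multihop("Who is the CEO of X Corp?", detector))

In this sketch the conflict check never enters the LLM prompt itself; the detector decides when to inject an edited fact, which is the decoupling the abstract argues for.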