PICKD: In-Situ Prompt Tuning for Knowledge-Grounded Dialogue Generation

Cited by: 2
Authors
Sarkar, Rajdeep [1 ]
Goswami, Koustava [2 ]
Arcan, Mihael [1 ]
McCrae, John [1 ]
Affiliations
[1] Univ Galway, Galway, Ireland
[2] Adobe Res Bangalore, Bangalore, Karnataka, India
Funding
Science Foundation Ireland;
Keywords
Language Model Prompting; Knowledge grounded Dialogue Systems; Knowledge Graphs;
DOI
10.1007/978-3-031-33383-5_10
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Generating informative, coherent and fluent responses to user queries is challenging yet critical for a rich user experience and the eventual success of dialogue systems. Knowledge-grounded dialogue systems leverage external knowledge to induce relevant facts into a dialogue. These systems must understand the semantic relatedness between the dialogue context and the available knowledge, and use that information for response generation. Although various innovative models have been proposed, they neither exploit the semantic entailment between the dialogue history and the knowledge nor effectively process knowledge from both structured and unstructured sources. In this work, we propose PICKD, a two-stage framework for knowledge-grounded dialogue. In the first stage, the Knowledge Selector chooses knowledge pertinent to the dialogue context from both structured and unstructured knowledge sources. PICKD leverages novel In-Situ prompt tuning for knowledge selection, wherein prompt tokens are injected into the dialogue-knowledge text tokens during knowledge retrieval. In the second stage, the Response Generator produces fluent and factual responses using the retrieved knowledge and the dialogue context. Extensive experiments on three domain-specific datasets demonstrate the effectiveness of PICKD over baseline methods for knowledge-grounded dialogue. The source code is available at https://github.com/rajbsk/pickd.
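The abstract's key mechanism is injecting prompt tokens "in situ" into the concatenated dialogue-knowledge token sequence during retrieval. A minimal sketch of that token layout, assuming hypothetical token names (`[PROMPT_i]`, `[CLS]`, `[SEP]`) and a fixed injection scheme that is illustrative only, not the paper's exact design:

```python
# Hypothetical sketch of in-situ prompt injection: placeholder prompt
# tokens are interleaved with the dialogue and knowledge text tokens
# before the sequence is passed to a (frozen) retrieval encoder.
# In actual prompt tuning, the [PROMPT_i] positions would map to
# learnable embeddings while the encoder weights stay fixed; this
# sketch only shows where the tokens would be placed.

def build_input(dialogue_tokens, knowledge_tokens, n_prompt=3):
    """Interleave n_prompt prompt tokens around the dialogue and knowledge."""
    prompts = [f"[PROMPT_{i}]" for i in range(n_prompt)]
    return (prompts[:1] + ["[CLS]"] + dialogue_tokens   # prompt before dialogue
            + prompts[1:2] + ["[SEP]"] + knowledge_tokens  # prompt before knowledge
            + prompts[2:] + ["[SEP]"])                  # trailing prompt

seq = build_input(["who", "directed", "it", "?"],
                  ["Nolan", "directed", "Inception"])
print(seq)
```

A retrieval score would then be computed from the encoder's output over this sequence; only the prompt embeddings would receive gradient updates.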
Pages: 124-136
Page count: 13
Related Papers
50 records in total
  • [41] Building knowledge-grounded dialogue systems with graph-based semantic modelling
    Yang, Yizhe
    Huang, Heyan
    Gao, Yang
    Li, Jiawei
    KNOWLEDGE-BASED SYSTEMS, 2024, 298
  • [42] Is a Knowledge-based Response Engaging?: An Analysis on Knowledge-Grounded Dialogue with Information Source Annotation
    Kodama, Takashi
    Kiyomaru, Hirokazu
    Huang, Yin Jou
    Okahisa, Taro
    Kurohashi, Sadao
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-SRW 2023, VOL 4, 2023, : 237 - 243
  • [43] Retrieval-Augmented Response Generation for Knowledge-Grounded Conversation in the Wild
    Ahn, Yeonchan
    Lee, Sang-Goo
    Shim, Junho
    Park, Jaehui
    IEEE ACCESS, 2022, 10 : 131374 - 131385
  • [44] Graph-Structured Context Understanding for Knowledge-grounded Response Generation
    Li, Yanran
    Li, Wenjie
    Wang, Zhitao
    SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 1930 - 1934
  • [45] There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory
    Fu, Tingchen
    Zhao, Xueliang
    Tao, Chongyang
    Wen, Ji-Rong
    Yan, Rui
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 3901 - 3913
  • [46] Well Begun is Half Done: Generator-agnostic Knowledge Pre-Selection for Knowledge-Grounded Dialogue
    Qin, Lang
    Zhang, Yao
    Liang, Hongru
    Wang, Jun
    Yang, Zhenglu
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 4696 - 4709
  • [47] KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation
    Chen, Wenhu
    Su, Yu
    Yan, Xifeng
    Wang, William Yang
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 8635 - 8648
  • [48] Knowledge-Grounded Response Generation with Deep Attentional Latent-Variable Model
    Ye, Hao-Tong
    Lo, Kai-Ling
    Su, Shang-Yu
    Chen, Yun-Nung
    COMPUTER SPEECH AND LANGUAGE, 2020, 63
  • [49] Overcoming Rigid and Monotonous: Enhancing Knowledge-Grounded Conversation Generation via Multi-granularity Knowledge
    Zhang, Xingsheng
    Deng, YiFan
    Hu, Yue
    Li, Yunpeng
    Guo, Ping
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT I, NLPCC 2024, 2025, 15359 : 3 - 15
  • [50] EARL: Informative Knowledge-Grounded Conversation Generation with Entity-Agnostic Representation Learning
    Zhou, Hao
    Huang, Minlie
    Liu, Yong
    Chen, Wei
    Zhu, Xiaoyan
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 2383 - 2395