Grounding Dialogue Systems via Knowledge Graph Aware Decoding with Pre-trained Transformers

Cited by: 3
Authors
Chaudhuri, Debanjan [2 ]
Rony, Md Rashad Al Hasan [1 ]
Lehmann, Jens [1 ,2 ]
Affiliations
[1] Fraunhofer IAIS, Dresden, Germany
[2] University of Bonn, Smart Data Analytics Group, Bonn, Germany
Source
SEMANTIC WEB, ESWC 2021 | 2021, Vol. 12731
Keywords
Knowledge graph; Dialogue system; Graph encoding; Knowledge integration
DOI
10.1007/978-3-030-77385-4_19
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Generating knowledge-grounded responses in both goal and non-goal oriented dialogue systems is an important research challenge. Knowledge Graphs (KGs) can be viewed as an abstraction of the real world, which can potentially facilitate a dialogue system in producing knowledge-grounded responses. However, integrating KGs into the dialogue generation process in an end-to-end manner is a non-trivial task. This paper proposes a novel architecture for integrating KGs into the response generation process by training a BERT model that learns to answer using the elements of the KG (entities and relations) in a multi-task, end-to-end setting. The k-hop subgraph of the KG is incorporated into the model during training and inference using a Graph Laplacian. Empirical evaluation suggests that the model achieves better knowledge groundedness (measured via the Entity F1 score) than other state-of-the-art models on both goal and non-goal oriented dialogues.
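To make the Graph Laplacian step concrete, below is a minimal NumPy sketch of how a k-hop KG subgraph could be folded into entity representations by repeated Laplacian-style propagation. This is an illustration under stated assumptions, not the authors' exact formulation: the function name, the random-walk normalization D^{-1}(A + I), and the per-hop propagation loop are all hypothetical choices made for the example.

    import numpy as np

    def encode_khop_subgraph(adj, entity_emb, k=2):
        """Propagate entity embeddings over a k-hop KG subgraph.

        adj:        (n, n) adjacency matrix of the extracted subgraph
        entity_emb: (n, d) initial embeddings of the subgraph entities
        k:          number of propagation hops

        Assumption: uses the random-walk-normalized operator
        D^{-1}(A + I); the paper's normalization may differ.
        """
        n = adj.shape[0]
        a_hat = adj + np.eye(n)                  # add self-loops
        deg = a_hat.sum(axis=1, keepdims=True)   # node degrees (>= 1 after self-loops)
        a_norm = a_hat / deg                     # row-normalize: D^{-1}(A + I)
        h = entity_emb
        for _ in range(k):                       # one smoothing step per hop
            h = a_norm @ h                       # mix each entity with its neighbors
        return h

    # Toy usage: a 3-entity path graph with 4-dimensional embeddings.
    adj = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]], dtype=float)
    emb = np.random.randn(3, 4)
    print(encode_khop_subgraph(adj, emb, k=2).shape)  # (3, 4)

After k rounds of this smoothing, each entity's vector summarizes its k-hop neighborhood, which is the property that lets a decoder condition on local KG structure rather than on isolated entity embeddings.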
Pages: 323-339
Page count: 17
Related Papers
Showing 41-50 of 50
  • [41] Sparse Pairwise Re-ranking with Pre-trained Transformers
    Gienapp, Lukas
    Froebe, Maik
    Hagen, Matthias
    Potthast, Martin
PROCEEDINGS OF THE 2022 ACM SIGIR INTERNATIONAL CONFERENCE ON THE THEORY OF INFORMATION RETRIEVAL, ICTIR 2022, 2022: 250-258
  • [42] An empirical study of pre-trained language models in simple knowledge graph question answering
    Hu, Nan
    Wu, Yike
    Qi, Guilin
    Min, Dehai
    Chen, Jiaoyan
    Pan, Jeff Z.
    Ali, Zafar
WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2023, 26 (05): 2855-2886
  • [43] BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models
    He, Bin
    Zhou, Di
    Xiao, Jinghui
    Jiang, Xin
    Liu, Qun
    Yuan, Nicholas Jing
    Xu, Tong
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020: 2281-2290
  • [45] KG-prompt: Interpretable knowledge graph prompt for pre-trained language models
    Chen, Liyi
    Liu, Jie
    Duan, Yutai
    Wang, Runze
    KNOWLEDGE-BASED SYSTEMS, 2025, 311
  • [46] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering
    Cao, Qingqing
    Trivedi, Harsh
    Balasubramanian, Aruna
    Balasubramanian, Niranjan
58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020: 4487-4497
  • [47] Towards Summarizing Code Snippets Using Pre-Trained Transformers
    Mastropaolo, Antonio
    Tufano, Rosalia
    Ciniselli, Matteo
    Aghajani, Emad
    Pascarella, Luca
    Bavota, Gabriele
arXiv preprint
  • [48] Routing Generative Pre-Trained Transformers for Printed Circuit Board
    Wang, Hao
    Tu, Jun
    Bai, Shenglong
    Zheng, Jie
    Qian, Weikang
    Chen, Jienan
2024 INTERNATIONAL SYMPOSIUM OF ELECTRONICS DESIGN AUTOMATION, ISEDA 2024, 2024: 160-165
  • [49] Investor's ESG tendency probed by pre-trained transformers
    Li, Chao
    Keeley, Alexander Ryota
    Takeda, Shutaro
    Seki, Daikichi
    Managi, Shunsuke
CORPORATE SOCIAL RESPONSIBILITY AND ENVIRONMENTAL MANAGEMENT, 2025, 32 (02): 2051-2071
  • [50] TWilBert: Pre-trained deep bidirectional transformers for Spanish Twitter
    Gonzalez, Jose Angel
    Hurtado, Lluis-F.
    Pla, Ferran
NEUROCOMPUTING, 2021, 426: 58-69