Grounding Dialogue Systems via Knowledge Graph Aware Decoding with Pre-trained Transformers

Times Cited: 3
Authors
Chaudhuri, Debanjan [2 ]
Rony, Md Rashad Al Hasan [1 ]
Lehmann, Jens [1, 2]
Affiliations
[1] Fraunhofer IAIS, Dresden, Germany
[2] Univ Bonn, Smart Data Analyt Grp, Bonn, Germany
Source
SEMANTIC WEB, ESWC 2021 | 2021 / Vol. 12731
Keywords
Knowledge graph; Dialogue system; Graph encoding; Knowledge integration
DOI
10.1007/978-3-030-77385-4_19
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Generating knowledge-grounded responses in both goal- and non-goal-oriented dialogue systems is an important research challenge. Knowledge Graphs (KGs) can be viewed as an abstraction of the real world, which can potentially facilitate a dialogue system in producing knowledge-grounded responses. However, integrating KGs into the dialogue generation process in an end-to-end manner is a non-trivial task. This paper proposes a novel architecture for integrating KGs into the response generation process by training a BERT model that learns to answer using the elements of the KG (entities and relations) in a multi-task, end-to-end setting. The k-hop subgraph of the KG is incorporated into the model during training and inference using a Graph Laplacian. Empirical evaluation suggests that the model achieves better knowledge groundedness (measured via Entity F1 score) than other state-of-the-art models on both goal- and non-goal-oriented dialogues.
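As a concrete illustration of the Graph Laplacian step the abstract describes, the sketch below smooths entity embeddings over the adjacency matrix of a k-hop subgraph. It is a minimal, hypothetical reconstruction, not the authors' implementation: the function names, the single smoothing step, and the fusion with BERT states are all assumptions made here for illustration.

```python
# Hypothetical sketch of Laplacian-based subgraph encoding; not the
# paper's released code. Names and the fusion step are assumptions.
import numpy as np

def normalized_laplacian(adj: np.ndarray) -> np.ndarray:
    """Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    D = np.diag(d_inv_sqrt)
    return np.eye(adj.shape[0]) - D @ adj @ D

def laplacian_smooth(entity_emb: np.ndarray, adj: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """One smoothing step, H' = H - alpha * L @ H, which pulls each
    entity embedding toward its neighbours in the k-hop subgraph."""
    return entity_emb - alpha * normalized_laplacian(adj) @ entity_emb

# Toy 3-entity subgraph (chain 0-1-2) with 4-dimensional embeddings.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
emb = np.random.randn(3, 4)
graph_aware = laplacian_smooth(emb, adj)
print(graph_aware.shape)  # (3, 4)
```

In the paper's architecture, graph-aware entity representations of this kind would be consumed alongside the BERT encoder states so that decoding can select KG entities and relations; the exact fusion mechanism is specified in the paper itself.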
Pages: 323-339
Page Count: 17
Related Papers
50 records in total
  • [31] NMT Enhancement based on Knowledge Graph Mining with Pre-trained Language Model
    Yang, Hao
    Qin, Ying
    Deng, Yao
    Wang, Minghan
    2020 22ND INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): DIGITAL SECURITY GLOBAL AGENDA FOR SAFE SOCIETY!, 2020, : 185 - 189
  • [32] Zero-shot Mathematical Problem Solving via Generative Pre-trained Transformers
    Galatolo, Federico A.
    Cimino, Mario G. C. A.
    Vaglini, Gigliola
    ICEIS: PROCEEDINGS OF THE 24TH INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS - VOL 1, 2022, : 479 - 483
  • [33] Knowledge Base Grounded Pre-trained Language Models via Distillation
    Sourty, Raphael
    Moreno, Jose G.
    Servant, Francois-Paul
    Tamine, Lynda
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1617 - 1625
  • [34] Enhancing Chinese Pre-trained Language Model via Heterogeneous Linguistics Graph
    Li, Yanzeng
    Cao, Jiangxia
    Cong, Xin
    Zhang, Zhenyu
    Yu, Bowen
    Zhu, Hongsong
    Liu, Tingwen
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 1986 - 1996
  • [35] Fusing graph structural information with pre-trained generative model for knowledge graph-to-text generation
    Shi, Xiayang
    Xia, Zhenlin
    Li, Yinlin
    Wang, Xuhui
    Niu, Yufeng
    KNOWLEDGE AND INFORMATION SYSTEMS, 2025, 67 (03) : 2619 - 2640
  • [36] Towards Summarizing Code Snippets Using Pre-Trained Transformers
    Mastropaolo, Antonio
    Ciniselli, Matteo
    Pascarella, Luca
    Tufano, Rosalia
    Aghajani, Emad
    Bavota, Gabriele
    PROCEEDINGS 2024 32ND IEEE/ACM INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION, ICPC 2024, 2024, : 1 - 12
  • [37] Classifying microfossil radiolarians on fractal pre-trained vision transformers
    Mimura, Kazuhide
    Itaki, Takuya
    Kataoka, Hirokatsu
    Miyakawa, Ayumu
SCIENTIFIC REPORTS, 2025, 15 (01)
  • [38] Quantifying Valence and Arousal in Text with Multilingual Pre-trained Transformers
    Mendes, Goncalo Azevedo
    Martins, Bruno
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2023, PT I, 2023, 13980 : 84 - 100
  • [39] Introducing pre-trained transformers for high entropy alloy informatics
    Kamnis, Spyros
    MATERIALS LETTERS, 2024, 358