Grounding Dialogue Systems via Knowledge Graph Aware Decoding with Pre-trained Transformers

Times Cited: 3
Authors
Chaudhuri, Debanjan [2]
Rony, Md Rashad Al Hasan [1]
Lehmann, Jens [1,2]
Affiliations
[1] Fraunhofer IAIS, Dresden, Germany
[2] University of Bonn, Smart Data Analytics Group, Bonn, Germany
Source
SEMANTIC WEB, ESWC 2021 | 2021, Vol. 12731
Keywords
Knowledge graph; Dialogue system; Graph encoding; Knowledge integration
DOI
10.1007/978-3-030-77385-4_19
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Generating knowledge-grounded responses in both goal-oriented and non-goal-oriented dialogue systems is an important research challenge. Knowledge Graphs (KG) can be viewed as an abstraction of the real world, which can potentially facilitate a dialogue system to produce knowledge-grounded responses. However, integrating KGs into the dialogue generation process in an end-to-end manner is a non-trivial task. This paper proposes a novel architecture for integrating KGs into the response generation process by training a BERT model that learns to answer using the elements of the KG (entities and relations) in a multi-task, end-to-end setting. The k-hop subgraph of the KG is incorporated into the model during training and inference using a Graph Laplacian. Empirical evaluation suggests that the model achieves better knowledge groundedness (measured via Entity F1 score) compared to other state-of-the-art models for both goal-oriented and non-goal-oriented dialogues.
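The abstract's reference to incorporating a k-hop subgraph via a Graph Laplacian can be illustrated with a minimal sketch. The snippet below is an assumption-based illustration rather than the authors' implementation: the library choices (networkx, numpy), the unnormalized Laplacian L = D - A, the function name k_hop_laplacian, and the toy KG are all assumptions introduced for illustration only.

import networkx as nx
import numpy as np

def k_hop_laplacian(kg, seed_entities, k=2):
    # Collect all nodes within k hops of the entities mentioned in the dialogue context.
    nodes = set(seed_entities)
    for entity in seed_entities:
        nodes |= set(nx.ego_graph(kg, entity, radius=k).nodes)
    subgraph = kg.subgraph(nodes)

    node_list = sorted(subgraph.nodes)
    a = nx.to_numpy_array(subgraph, nodelist=node_list)  # adjacency matrix A
    d = np.diag(a.sum(axis=1))                           # degree matrix D
    return node_list, d - a                              # unnormalized Laplacian L = D - A

# Toy usage: a tiny KG and one entity assumed to appear in the dialogue.
kg = nx.Graph()
kg.add_edges_from([("Berlin", "Germany"), ("Germany", "Europe"), ("Bonn", "Germany")])
nodes, laplacian = k_hop_laplacian(kg, ["Berlin"], k=2)
print(nodes)
print(laplacian)

In the paper's setting, such a Laplacian over the extracted subgraph would be fed to the model alongside the dialogue context so that decoding can attend to KG entities and relations; the exact integration into BERT is described in the paper itself.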
Pages: 323 - 339
Number of pages: 17
Related Papers (50 total)
  • [1] Grounding Dialogue History: Strengths and Weaknesses of Pre-trained Transformers
    Greco, Claudio
    Testoni, Alberto
    Bernardi, Raffaella
    AIXIA 2020 - ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 12414 : 263 - 279
  • [2] Are Pre-trained Convolutions Better than Pre-trained Transformers?
    Tay, Yi
    Dehghani, Mostafa
    Gupta, Jai
    Aribandi, Vamsi
    Bahri, Dara
    Qin, Zhen
    Metzler, Donald
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 4349 - 4359
  • [3] Calibration of Pre-trained Transformers
    Desai, Shrey
    Durrett, Greg
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 295 - 302
  • [4] Knowledge Grounded Pre-Trained Model For Dialogue Response Generation
    Wang, Yanmeng
    Rong, Wenge
    Zhang, Jianfei
    Ouyang, Yuanxin
    Xiong, Zhang
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [5] CopiFilter: An Auxiliary Module Adapts Pre-trained Transformers for Medical Dialogue Summarization
    Duan, Jiaxin
    Liu, Junfei
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257 : 99 - 114
  • [6] Emergent Modularity in Pre-trained Transformers
    Zhang, Zhengyan
    Zeng, Zhiyuan
    Lin, Yankai
    Xiao, Chaojun
    Wang, Xiaozhi
    Han, Xu
    Liu, Zhiyuan
    Xie, Ruobing
    Sun, Maosong
    Zhou, Jie
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 4066 - 4083
  • [7] Pre-trained transformers: an empirical comparison
    Casola, Silvia
    Lauriola, Ivano
    Lavelli, Alberto
    MACHINE LEARNING WITH APPLICATIONS, 2022, 9
  • [8] Knowledge graph extension with a pre-trained language model via unified learning method
    Choi, Bonggeun
    Ko, Youngjoong
    KNOWLEDGE-BASED SYSTEMS, 2023, 262
  • [9] Knowledge-Grounded Dialogue Generation with Pre-trained Language Models
    Zhao, Xueliang
    Wu, Wei
    Xu, Can
    Tao, Chongyang
    Zhao, Dongyan
    Yan, Rui
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 3377 - 3390
  • [10] Unsupervised Out-of-Domain Detection via Pre-trained Transformers
    Xu, Keyang
    Ren, Tongzheng
    Zhang, Shikun
    Feng, Yihao
    Xiong, Caiming
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 1052 - 1061