CollRec: Pre-Trained Language Models and Knowledge Graphs Collaborate to Enhance Conversational Recommendation System

Cited by: 0
Authors
Liu, Shuang [1 ]
Ao, Zhizhuo [1 ]
Chen, Peng [2 ]
Kolmanic, Simon [3 ]
Affiliations
[1] Dalian Minzu Univ, Sch Comp Sci & Engn, Dalian 116600, Peoples R China
[2] Dalian Neusoft Univ Informat, Sch Comp & Software, Dalian 116023, Peoples R China
[3] Univ Maribor, Fac Elect Engn & Comp Sci, Maribor 2000, Slovenia
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Knowledge graphs; Oral communication; Task analysis; Recommender systems; Motion pictures; Costs; Accuracy; Large language models; Conversational recommendation system; knowledge graph; large language model; end-to-end generation; fine-tuning; ReDial; WebNLG 2020 Challenge
DOI
10.1109/ACCESS.2024.3434720
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Existing conversational recommender systems (CRSs) lack generality in how they incorporate external information through knowledge graphs. Their recommendation and generation modules are only loosely connected during training and shallowly integrated at inference, relying on a simple switching or copying mechanism to merge recommended items into generated responses. These problems significantly degrade recommendation performance. To alleviate them, we propose CollRec, a novel unified framework in which pre-trained language models and knowledge graphs collaborate to enhance conversational recommendation. We fine-tune a pre-trained language model to efficiently extract knowledge graphs from conversational text, perform entity-based recommendation over the generated graph nodes and edges, and fine-tune a large-scale pre-trained language model to generate fluent and diverse responses. Experimental results on the WebNLG 2020 Challenge, ReDial, and Reddit-Movie datasets show that CollRec significantly outperforms state-of-the-art methods.
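The abstract outlines a three-stage pipeline: knowledge-graph extraction from the dialogue with a fine-tuned PLM, entity-based recommendation over the extracted graph, and response generation with a second PLM. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the authors' released implementation; the model checkpoints ("t5-small", "gpt2"), the "subject | relation | object" triple format, and the overlap-based scoring are placeholder assumptions for illustration only.

```python
# Hypothetical sketch of a CollRec-style pipeline; checkpoints and formats are
# illustrative assumptions, not the paper's actual implementation details.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

# Stage 1: a seq2seq PLM (e.g. fine-tuned on WebNLG-style graph generation)
# turns the dialogue history into knowledge-graph triples.
kg_tokenizer = AutoTokenizer.from_pretrained("t5-small")       # placeholder checkpoint
kg_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")   # placeholder checkpoint

def extract_triples(dialogue: str) -> list[tuple[str, str, str]]:
    """Generate 'subject | relation | object' triples from conversational text."""
    inputs = kg_tokenizer("graph: " + dialogue, return_tensors="pt", truncation=True)
    output_ids = kg_model.generate(**inputs, max_new_tokens=64)
    decoded = kg_tokenizer.decode(output_ids[0], skip_special_tokens=True)
    triples = []
    for chunk in decoded.split(";"):
        parts = [p.strip() for p in chunk.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples

# Stage 2: entity-based recommendation -- rank catalogue items by how many
# entities they share with the nodes of the extracted graph.
def recommend(triples: list[tuple[str, str, str]],
              catalogue: dict[str, set[str]], top_k: int = 3) -> list[str]:
    mentioned = {node for subj, _, obj in triples for node in (subj, obj)}
    scores = {item: len(mentioned & attrs) for item, attrs in catalogue.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Stage 3: a generative PLM produces the response, conditioned on both the
# dialogue and the recommended items, so the two modules stay coupled.
responder = pipeline("text-generation", model="gpt2")           # placeholder checkpoint

def respond(dialogue: str, items: list[str]) -> str:
    prompt = f"{dialogue}\nRecommended items: {', '.join(items)}\nSystem:"
    return responder(prompt, max_new_tokens=40)[0]["generated_text"]
```

Conditioning the response generator directly on the recommended items is one simple way to realize the tight coupling between the recommendation and generation modules that the abstract argues for, in place of a switching or copying mechanism.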
Pages: 104663 - 104675
Page count: 13
Related Papers
(50 in total)
  • [21] Enhancing pre-trained language models with Chinese character morphological knowledge
    Zheng, Zhenzhong
    Wu, Xiaoming
    Liu, Xiangzhi
    INFORMATION PROCESSING & MANAGEMENT, 2025, 62 (01)
  • [22] Gauging, enriching and applying geography knowledge in Pre-trained Language Models
    Ramrakhiyani, Nitin
    Varma, Vasudeva
    Palshikar, Girish Keshav
    Pawar, Sachin
    INFORMATION PROCESSING & MANAGEMENT, 2025, 62 (01)
  • [23] Knowledge Base Grounded Pre-trained Language Models via Distillation
    Sourty, Raphael
    Moreno, Jose G.
    Servant, Francois-Paul
    Tamine, Lynda
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1617 - 1625
  • [24] Knowledge-Grounded Dialogue Generation with Pre-trained Language Models
    Zhao, Xueliang
    Wu, Wei
    Xu, Can
    Tao, Chongyang
    Zhao, Dongyan
    Yan, Rui
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 3377 - 3390
  • [25] Leveraging Pre-trained Language Models for Time Interval Prediction in Text-Enhanced Temporal Knowledge Graphs
    Islakoglu, Duygu Sezen
    Chekol, Melisachew Wudage
    Velegrakis, Yannis
    SEMANTIC WEB, PT I, ESWC 2024, 2024, 14664 : 59 - 78
  • [26] Annotating Columns with Pre-trained Language Models
    Suhara, Yoshihiko
    Li, Jinfeng
    Li, Yuliang
    Zhang, Dan
    Demiralp, Cagatay
    Chen, Chen
    Tan, Wang-Chiew
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22), 2022, : 1493 - 1503
  • [27] LaoPLM: Pre-trained Language Models for Lao
    Lin, Nankai
    Fu, Yingwen
    Yang, Ziyu
    Chen, Chuwei
    Jiang, Shengyi
    LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 6506 - 6512
  • [28] Deciphering Stereotypes in Pre-Trained Language Models
    Ma, Weicheng
    Scheible, Henry
    Wang, Brian
    Veeramachaneni, Goutham
    Chowdhary, Pratim
    Sung, Alan
    Koulogeorge, Andrew
    Wang, Lili
    Yang, Diyi
    Vosoughi, Soroush
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 11328 - 11345
  • [29] PhoBERT: Pre-trained language models for Vietnamese
    Dat Quoc Nguyen
    Anh Tuan Nguyen
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1037 - 1042
  • [30] HinPLMs: Pre-trained Language Models for Hindi
    Huang, Xixuan
    Lin, Nankai
    Li, Kexin
    Wang, Lianxi
    Gan, Suifu
    2021 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2021, : 241 - 246