Can Large Language Models Assess Serendipity in Recommender Systems?

Cited by: 0
Authors
Tokutake, Yu [1 ]
Okamoto, Kazushi [1 ]
Institutions
[1] Univ Electrocommun, Grad Sch Informat & Engn, 1-5-1 Chofugaoka, Chofu, Tokyo 1828585, Japan
Keywords
recommender system; serendipity; large language model; value judgment;
DOI
10.20965/jaciii.2024.p1263
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Serendipity-oriented recommender systems aim to counteract the overspecialization of user preferences. However, evaluating a user's serendipitous response to a recommended item is challenging owing to its emotional nature. In this study, we address this issue by leveraging the rich knowledge of large language models (LLMs), which can perform a wide variety of tasks. First, the study explores the alignment between serendipity assessments made by LLMs and those made by humans. The LLMs were assigned a binary classification task: predicting whether a user would find a recommended item serendipitous. The predictive performance of three LLMs was measured on a benchmark dataset in which humans assigned the ground-truth labels for serendipitous items. The experimental findings revealed that the LLM-based assessment methods did not agree closely with human assessments; however, they performed as well as or better than the baseline methods. Further validation indicates that the number of user rating histories provided in the LLM prompt should be chosen carefully to avoid both insufficient and excessive input, and that the outputs of LLMs showing high classification performance are difficult to interpret.
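The assessment setup described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' actual implementation: the prompt wording, the `max_history` cap, and the helper names `build_prompt` and `parse_answer` are all assumptions. It shows the two pieces the abstract implies: constructing a binary-classification prompt from a truncated user rating history, and mapping the model's free-text reply to a yes/no label.

```python
def build_prompt(rating_history, candidate_item, max_history=10):
    """Build a binary-classification prompt from a user's rating history.

    The history is truncated to max_history entries, reflecting the paper's
    finding that both insufficient and excessive history hurt performance.
    """
    history_lines = "\n".join(
        f"- {title}: rated {rating}/5"
        for title, rating in rating_history[:max_history]
    )
    return (
        "A user has rated the following items:\n"
        f"{history_lines}\n\n"
        f"Would this user find the recommendation '{candidate_item}' "
        "serendipitous (unexpected yet relevant)? Answer Yes or No."
    )


def parse_answer(llm_output):
    """Map the LLM's free-text reply to a binary serendipity label."""
    return llm_output.strip().lower().startswith("yes")


# Usage: the prompt would be sent to an LLM; here we only parse sample replies.
prompt = build_prompt([("Movie A", 5), ("Movie B", 3)], "Movie C")
assert "Movie C" in prompt
assert parse_answer("Yes, because it differs from the user's usual genre.")
assert not parse_answer("No.")
```

A real pipeline would compare `parse_answer` outputs against the human-assigned ground truth to compute agreement, which is the evaluation the paper reports.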
Pages: 1263-1272
Page count: 10
Related Papers
50 in total
  • [41] A Framework to Assess the Adaptively of Recommender Systems
    Huo Hongmei
    Yang Da
    PROCEEDINGS OF THE 15TH INTERNATIONAL CONFERENCE ON INDUSTRIAL ENGINEERING AND ENGINEERING MANAGEMENT, VOLS A-C, 2008, : 1995 - 1998
  • [42] Assess and Summarize: Improve Outage Understanding with Large Language Models
    Jin, Pengxiang
    Zhang, Shenglin
    Ma, Minghua
    Li, Haozhe
    Kang, Yu
    Li, Liqun
    Liu, Yudong
    Qiao, Bo
    Zhang, Chaoyun
    Zhao, Pu
    He, Shilin
    Sarro, Federica
    Dang, Yingnong
    Rajmohan, Saravana
    Lin, Qingwei
    Zhang, Dongmei
    PROCEEDINGS OF THE 31ST ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2023, 2023, : 1657 - 1668
  • [43] AIREG: Enhanced Educational Recommender System with Large Language Models and Knowledge Graphs
    Fathi, Fatemeh
    SEMANTIC WEB: ESWC 2024 SATELLITE EVENTS, PT II, 2025, 15345 : 84 - 93
  • [44] Exploring the Potential of the Resolving Sets Model for Introducing Serendipity to Recommender Systems
    Tuval, Noa
    ACM UMAP '19: PROCEEDINGS OF THE 27TH ACM CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, 2019, : 353 - 356
  • [45] From PARIS to LE-PARIS: toward patent response automation with recommender systems and collaborative large language models
    Chu, Jung-Mei
    Lo, Hao-Cheng
    Hsiang, Jieh
    Cho, Chun-Chieh
    ARTIFICIAL INTELLIGENCE AND LAW, 2024,
  • [46] Can We Edit Multimodal Large Language Models?
    Cheng, Siyuan
    Tian, Bozhong
    Liu, Qingbin
    Chen, Xi
    Wang, Yongheng
    Chen, Huajun
    Zhang, Ningyu
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 13877 - 13888
  • [47] Can large language models generate geospatial code?
    State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
    arXiv preprint
  • [48] Can Large Language Models Assist in Hazard Analysis?
    Diemert, Simon
    Weber, Jens H.
    COMPUTER SAFETY, RELIABILITY, AND SECURITY, SAFECOMP 2023 WORKSHOPS, 2023, 14182 : 410 - 422
  • [49] Can Large Language Models Write Parallel Code?
    Nichols, Daniel
    Davis, Joshua H.
    Xie, Zhaojun
    Rajaram, Arjun
    Bhatele, Abhinav
    PROCEEDINGS OF THE 33RD INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE PARALLEL AND DISTRIBUTED COMPUTING, HPDC 2024, 2024,
  • [50] Large Language Models can Share Images, Too!
    Lee, Young-Jun
    Lee, Dokyong
    Sung, Joo Won
    Hyeon, Jonghwan
    Choi, Ho-Jin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 692 - 713