TruthSR: Trustworthy Sequential Recommender Systems via User-generated Multimodal Content

Cited by: 0
Authors
Yan, Meng [1 ]
Huang, Haibin [1 ]
Liu, Ying [2 ]
Zhao, Juan [3 ]
Gao, Xiyue [1 ]
Xu, Cai [1 ]
Guan, Ziyu [1 ]
Zhao, Wei [1 ]
Affiliations
[1] Xidian Univ, Xian, Peoples R China
[2] Northwest Univ, Xian, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Source
DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3 | 2025, Vol. 14852
Funding
National Natural Science Foundation of China
Keywords
User-generated content; Sequential recommender system; Trustworthy learning
DOI
10.1007/978-981-97-5555-4_12
CLC Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Sequential recommender systems explore users' preferences and behavioral patterns from their historically generated data. Recently, researchers have sought to improve sequential recommendation by exploiting massive user-generated multi-modal content, such as reviews and images. However, this content inevitably contains noise. Some studies attempt to reduce noise interference by suppressing cross-modal inconsistent information, but this can also suppress personalized user preferences. Moreover, it is almost impossible to entirely eliminate noise from diverse user-generated multi-modal content. To address these problems, we propose a trustworthy sequential recommendation method that works with noisy user-generated multi-modal content. Specifically, we explicitly capture the consistency and complementarity of user-generated multi-modal content to mitigate noise interference, and we model users' multi-modal sequential preferences. In addition, we design a trustworthy decision mechanism that integrates the subjective user perspective and the objective item perspective to dynamically evaluate the uncertainty of prediction results. Experimental evaluation on four widely used datasets demonstrates the superior performance of our model compared to state-of-the-art methods. The code is released at https://github.com/FairyMeng/TrustSR.
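The abstract does not detail the trustworthy decision mechanism. A common way to realise "dynamically evaluating the uncertainty of prediction results" from a subjective user view and an objective item view is subjective-logic evidential fusion over Dirichlet evidence, as in trusted multi-view learning; the sketch below is a minimal illustration under that assumption, not the paper's actual implementation. The names `dirichlet_opinion` and `fuse_opinions` and the toy evidence values are hypothetical.

```python
import torch

def dirichlet_opinion(evidence: torch.Tensor):
    """Turn non-negative evidence over K candidates into a subjective-logic
    opinion: per-candidate belief masses b_k and an uncertainty mass u.

    alpha = evidence + 1 parameterises a Dirichlet distribution; the
    uncertainty u = K / S shrinks as the total evidence strength S grows.
    """
    num_classes = evidence.size(-1)
    alpha = evidence + 1.0
    strength = alpha.sum(dim=-1, keepdim=True)  # Dirichlet strength S
    belief = evidence / strength                # b_k = e_k / S
    uncertainty = num_classes / strength        # u = K / S; sum(b) + u = 1
    return belief, uncertainty

def fuse_opinions(b1, u1, b2, u2):
    """Combine two opinions with the reduced Dempster-Shafer rule used in
    evidential deep learning: conflicting belief is discounted, and the
    fused uncertainty stays high when either view is weak."""
    # Conflict C: belief the two views assign to *different* candidates.
    conflict = (b1.sum(-1, keepdim=True) * b2.sum(-1, keepdim=True)
                - (b1 * b2).sum(-1, keepdim=True))
    scale = 1.0 / (1.0 - conflict).clamp_min(1e-8)
    belief = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    uncertainty = scale * (u1 * u2)
    return belief, uncertainty

# Toy example: K = 4 candidate items, one user, two evidence sources.
user_evidence = torch.tensor([[4.0, 1.0, 0.5, 0.0]])  # subjective user view
item_evidence = torch.tensor([[3.0, 2.0, 0.0, 0.0]])  # objective item view

b_u, u_u = dirichlet_opinion(user_evidence)
b_i, u_i = dirichlet_opinion(item_evidence)
belief, uncertainty = fuse_opinions(b_u, u_u, b_i, u_i)
print("fused belief:", belief)            # highest mass on item 0
print("fused uncertainty:", uncertainty)  # low only when views agree strongly
```

Under such a scheme, a prediction whose fused uncertainty mass remains high can be down-weighted or flagged rather than trusted outright, which matches the stated goal of trustworthy decisions over noisy user-generated content.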
Pages: 180-195
Page count: 16
Related Papers (50 in total)
  • [21] Assessing the Quality of User-Generated Content
    Stefan Winkler
    ZTE Communications, 2013, 11 (01) : 37 - 40
  • [22] The future of user-generated content is now
    Marino, Gregoire
    JOURNAL OF INTELLECTUAL PROPERTY LAW & PRACTICE, 2013, 8 (03) : 183 - 183
  • [23] Principles for Modeling User-Generated Content
    Lukyanenko, Roman
    Parsons, Jeffrey
    CONCEPTUAL MODELING, ER 2015, 2015, 9381 : 432 - 440
  • [24] A Solution for Navigating User-Generated Content
    Uusitalo, Severi
    Eskolin, Peter
    Belimpasakis, Petros
    2009 8TH IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY - SCIENCE AND TECHNOLOGY, 2009, : 219 - 220
  • [25] Generative AI in User-Generated Content
    Hua, Yiqing
    Niu, Shuo
    Cai, Jie
    Chilton, Lydia B.
    Heuer, Hendrik
    Wohn, Donghee Yvette
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [26] Extraversion as a stimulus for user-generated content
    Pagani, Margherita
    Goldsmith, Ronald E.
    Hofacker, Charles F.
    JOURNAL OF RESEARCH IN INTERACTIVE MARKETING, 2013, 7 (04) : 242 - 256
  • [27] Assessing the impact of a health intervention via user-generated Internet content
    Lampos, Vasileios
    Yom-Tov, Elad
    Pebody, Richard
    Cox, Ingemar J.
    DATA MINING AND KNOWLEDGE DISCOVERY, 2015, 29 (05) : 1434 - 1457
  • [28] Taming User-Generated Content in Mobile Networks via Drop Zones
    Trestian, Ionut
    Ranjan, Supranamaya
    Kuzmanovic, Aleksandar
    Nucci, Antonio
    2011 PROCEEDINGS IEEE INFOCOM, 2011, : 2840 - 2848
  • [30] Editorial: Online User Behavior and User-Generated Content
    Saura, Jose Ramon
    Dwivedi, Yogesh K.
    Palacios-Marques, Daniel
    FRONTIERS IN PSYCHOLOGY, 2022, 13