Improving graph collaborative filtering with multimodal-side-information-enriched contrastive learning

Cited by: 2
Authors
Shan, Lei [1 ]
Yuan, Huanhuan [1 ]
Zhao, Pengpeng [1 ]
Qu, Jianfeng [1 ]
Fang, Junhua [1 ]
Liu, Guanfeng [2 ]
Sheng, Victor S. [3 ]
Affiliations
[1] Soochow Univ, Coll Comp Sci & Technol, Suzhou 215006, Jiangsu, Peoples R China
[2] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
[3] Texas Tech Univ, Lubbock, TX 79409 USA
Funding
National Natural Science Foundation of China
Keywords
Recommender systems; Collaborative filtering; Graph neural network; Multimodal recommendation; Contrastive learning;
DOI
10.1007/s10844-023-00807-y
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multimodal side information, such as images and text, is commonly used to supplement and improve graph collaborative filtering recommendations. However, there is often a semantic gap between multimodal information and collaborative filtering signals. Previous works typically fuse or align the two directly, which leads to semantic distortion or degradation. In addition, multimodal information introduces extra noise, and previous methods lack explicit supervision to identify it. To tackle these issues, we propose a novel contrastive learning approach for graph collaborative filtering, named Multimodal-Side-Information-enriched Contrastive Learning (MSICL). MSICL does not fuse multimodal information directly; instead, it explicitly captures users' potential preferences for similar images or text by contrasting ID embeddings, and it filters noise in the multimodal side information. Specifically, we first search for samples with similar images or text as positive contrastive pairs. Second, because some retrieved pairs may be irrelevant, we filter out pairs that have no interaction relationship to remove this noise. Third, we contrast the ID embeddings of the remaining true positive pairs to exploit the similarity relationships latent in the multimodal side information. Extensive experiments on three datasets demonstrate the superiority of our method for multimodal recommendation. Moreover, our approach significantly reduces computation and memory cost compared to previous work.
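The three steps outlined in the abstract (mining image/text-similar pairs, filtering pairs without an interaction relationship, and contrasting ID embeddings) can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' released code; all names here (mm_features, interaction, id_emb, top_k, tau) are hypothetical.

```python
# Minimal sketch of an MSICL-style contrastive objective, assuming dense
# multimodal features, a binary item-item co-interaction matrix, and
# learnable ID embeddings. Names are illustrative, not the paper's API.
import torch
import torch.nn.functional as F

def mine_positive_pairs(mm_features, interaction, top_k=5):
    """Steps 1-2: retrieve items with similar image/text features, then keep
    only pairs that also share an interaction relationship (noise filtering)."""
    feats = F.normalize(mm_features, dim=-1)          # (N, d) modality features
    sim = feats @ feats.t()                           # cosine similarity
    sim.fill_diagonal_(-float("inf"))                 # exclude self-pairs
    _, nbrs = sim.topk(top_k, dim=-1)                 # top-k similar candidates
    anchors = torch.arange(feats.size(0)).unsqueeze(1).expand_as(nbrs)
    pairs = torch.stack([anchors.reshape(-1), nbrs.reshape(-1)], dim=1)
    # keep only candidate pairs connected in the co-interaction matrix;
    # the remaining candidates are treated as noise and discarded
    mask = interaction[pairs[:, 0], pairs[:, 1]] > 0
    return pairs[mask]

def msicl_loss(id_emb, pairs, tau=0.2):
    """Step 3: InfoNCE-style contrast on the ID embeddings of true positive
    pairs, with all other items in the catalog acting as negatives."""
    z = F.normalize(id_emb, dim=-1)                   # (N, d) ID embeddings
    logits = z[pairs[:, 0]] @ z.t() / tau             # (P, N) scaled similarities
    return F.cross_entropy(logits, pairs[:, 1])       # positive = matched item
```

In practice, a loss of this kind would typically be added with a weighting coefficient to the base graph collaborative filtering objective (for example, a BPR loss over user-item interactions), so that the contrastive term supervises the ID embeddings without fusing the multimodal features directly.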
Pages: 143-161 (19 pages)
Related papers (50 in total)
  • [31] Explainable Collaborative Filtering Recommendations Enriched with Contextual Information
    Vultureanu-Albisi, Alexandra
    Badica, Costin
    2021 25TH INTERNATIONAL CONFERENCE ON SYSTEM THEORY, CONTROL AND COMPUTING (ICSTCC), 2021, : 701 - 706
  • [32] Graph contrastive learning with multiple information fusion
    Wang, Xiaobao
    Yang, Jun
    Wang, Zhiqiang
    He, Dongxiao
    Zhao, Jitao
    Huang, Yuxiao
    Jin, Di
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 268
  • [33] Fusion and Discrimination: A Multimodal Graph Contrastive Learning Framework for Multimodal Sarcasm Detection
    Liang, Bin
    Gui, Lin
    He, Yulan
    Cambria, Erik
    Xu, Ruifeng
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (04) : 1874 - 1888
  • [34] Adversarial Graph Contrastive Learning with Information Regularization
    Feng, Shengyu
    Jing, Baoyu
    Zhu, Yada
    Tong, Hanghang
    PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 1362 - 1371
  • [35] KGCFRec: Improving Collaborative Filtering Recommendation with Knowledge Graph
    Peng, Jiquan
    Gong, Jibing
    Zhou, Chao
    Zang, Qian
    Fang, Xiaohan
    Yang, Kailun
    Yu, Jing
    ELECTRONICS, 2024, 13 (10)
  • [36] Hypergraph contrastive learning for recommendation with side information
    Ao, Dun
    Cao, Qian
    Wang, Xiaofeng
    INTERNATIONAL JOURNAL OF INTELLIGENT COMPUTING AND CYBERNETICS, 2024, 17 (04) : 657 - 670
  • [37] Graph attention contrastive learning with missing modality for multimodal recommendation
    Zhao, Wenqian
    Yang, Kai
    Ding, Peijin
    Na, Ce
    Li, Wen
    KNOWLEDGE-BASED SYSTEMS, 2025, 311
  • [38] Multimodal Graph Contrastive Learning for Multimedia-Based Recommendation
    Liu, Kang
    Xue, Feng
    Guo, Dan
    Sun, Peijie
    Qian, Shengsheng
    Hong, Richang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 9343 - 9355
  • [39] Self-supervised contrastive learning for implicit collaborative filtering
    Song, Shipeng
    Liu, Bin
    Teng, Fei
    Li, Tianrui
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139
  • [40] Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering
    Sun, Peijie
    Wu, Le
    Zhang, Kun
    Chen, Xiangzhi
    Wang, Meng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (05) : 2069 - 2081