AIRec: Attentive intersection model for tag-aware recommendation

Cited by: 0
Authors
Chen B. [1 ]
Ding Y. [1 ]
Xin X. [2 ]
Li Y. [1 ]
Wang Y. [1 ]
Wang D. [1 ]
Affiliations
[1] School of Software, Shanghai Jiao Tong University, Shanghai
[2] School of Computing Science, University of Glasgow, Glasgow
Source
Neurocomputing | 2021 / Vol. 421
Keywords
Attention mechanism; Neural networks; Tag-aware collaborative filtering;
DOI
10.1016/j.neucom.2020.08.018
Abstract
Tag-aware recommender systems (TRS) utilize rich tagging information to better depict user portraits and item features. Recently, many efforts have been made to improve TRS with neural networks. However, existing methods construct user representations from either explicit tagging behaviors or implicitly interacted items, which is inadequate for capturing multi-aspect user preferences. Besides, the intersection between user and item tags, which is crucial for better recommendation, remains under-investigated. In this paper, we propose AIRec, an attentive intersection model for TRS, to address the above issues. More precisely, we first project the sparse tag vectors into a latent space through a multi-layer perceptron (MLP). Then, the user representations are constructed with a hierarchical attention network, where the item-level attention differentiates the contributions of interacted items and the preference-level attention discriminates the saliencies between explicit and implicit preferences. After that, the intersection between user and item tags is exploited to enhance the learning of conjunct features. Finally, the user and item representations are concatenated and fed to factorization machines (FM) for score prediction. We conduct extensive experiments on two real-world datasets, demonstrating significant improvements of AIRec over state-of-the-art methods for tag-aware top-n recommendation. © 2020 Elsevier B.V.
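The pipeline in the abstract (MLP tag projection, item-level and preference-level attention, a tag-intersection feature, and an FM scorer) can be sketched as follows. This is a minimal NumPy illustration with randomly initialized weights and made-up dimensions; the helper names (`mlp`, `attention_pool`) and the exact layer shapes are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W, b):
    # One projection layer with ReLU: maps a sparse tag vector into latent space.
    return np.maximum(0.0, x @ W + b)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(vectors, q):
    # Dot-product attention with query q; returns the weighted sum and weights.
    w = softmax(vectors @ q)
    return w @ vectors, w

n_tags, d = 50, 8                              # hypothetical sizes
W, b = rng.normal(size=(n_tags, d)) * 0.1, np.zeros(d)
q_item = rng.normal(size=d)                    # item-level attention query (learnable in practice)
q_pref = rng.normal(size=d)                    # preference-level attention query

# Explicit preference: embedding of the user's own tagging vector.
user_tags = rng.integers(0, 2, size=n_tags).astype(float)
explicit = mlp(user_tags, W, b)

# Implicit preference: item-level attention over interacted items' tag embeddings.
item_tag_vecs = rng.integers(0, 2, size=(4, n_tags)).astype(float)
implicit, item_w = attention_pool(mlp(item_tag_vecs, W, b), q_item)

# Preference-level attention fuses the explicit and implicit views.
user_emb, pref_w = attention_pool(np.stack([explicit, implicit]), q_pref)

# Candidate item embedding plus a feature built from the user/item tag intersection.
cand_tags = rng.integers(0, 2, size=n_tags).astype(float)
cand_emb = mlp(cand_tags, W, b)
inter_emb = mlp(user_tags * cand_tags, W, b)   # shared tags only

# FM-style scoring over the concatenated representation (second-order trick).
x = np.concatenate([user_emb, cand_emb, inter_emb])
V = rng.normal(size=(x.size, 4)) * 0.1
linear = rng.normal(size=x.size) * 0.1
score = linear @ x + 0.5 * (((x @ V) ** 2).sum() - ((x ** 2) @ (V ** 2)).sum())
```

Both attention stages produce weights that sum to one, so the model can be inspected to see which interacted items, and which of the two preference views, drive a given prediction.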
Pages: 105 - 114
Page count: 9
Related Papers
50 records total
  • [31] An Attribute-aware Neural Attentive Model for Next Basket Recommendation
    Bai, Ting
    Nie, Jian-Yun
    Zhao, Wayne Xin
    Zhu, Yutao
    Du, Pan
    Wen, Ji-Rong
    ACM/SIGIR PROCEEDINGS 2018, 2018, : 1201 - 1204
  • [32] A deep learning based trust- and tag-aware recommender system
    Ahmadian, Sajad
    Ahmadian, Milad
    Jalili, Mahdi
    NEUROCOMPUTING, 2022, 488 : 557 - 571
  • [33] Tag-Aware Recommender Systems: A State-of-the-Art Survey
    Zi-Ke Zhang
    Tao Zhou
    Yi-Cheng Zhang
    Journal of Computer Science and Technology, 2011, 26 : 767 - 777
  • [34] Tag-Aware Recommender System Based on Deep Reinforcement Learning
    Zhao, Zhiruo
    Chen, Xiliang
    Xu, Zhixiong
    Cao, Lei
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2021, 2021
  • [35] Exploiting relational tag expansion for dynamic user profile in a tag-aware ranking recommender system
    Pan, Yinghui
    Huo, Yongfeng
    Tang, Jing
    Zeng, Yifeng
    Chen, Bilian
    INFORMATION SCIENCES, 2021, 545 : 448 - 464
  • [36] TAG-AWARE IMAGE CLASSIFICATION VIA NESTED DEEP BELIEF NETS
    Yuan, Zhaoquan
    Sang, Jitao
    Xu, Changsheng
    2013 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME 2013), 2013,
  • [37] Attribute-aware deep attentive recommendation
    Xiaoxin Sun
    Lisa Zhang
    Yuling Wang
    Mengying Yu
    Minghao Yin
    Bangzuo Zhang
    The Journal of Supercomputing, 2021, 77 : 5510 - 5527
  • [38] TRAVEL: Tag-Aware Conversational FAQ Retrieval via Reinforcement Learning
    Chen, Yue
    Jin, Dingnan
    Huang, Chen
    Liu, Jia
    Lei, Wenqiang
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 3861 - 3872
  • [39] Attribute-aware deep attentive recommendation
    Sun, Xiaoxin
    Zhang, Lisa
    Wang, Yuling
    Yu, Mengying
    Yin, Minghao
    Zhang, Bangzuo
    JOURNAL OF SUPERCOMPUTING, 2021, 77 (06): : 5510 - 5527
  • [40] Attentive Aspect Modeling for Review-Aware Recommendation
    Guan, Xinyu
    Cheng, Zhiyong
    He, Xiangnan
    Zhang, Yongfeng
    Zhu, Zhibo
    Peng, Qinke
    Chua, Tat-Seng
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2019, 37 (03)