Dual-view hypergraph attention network for news recommendation

Cited by: 1
Authors
Liu, Wenxuan [1]
Zhang, Zizhuo [1]
Wang, Bang [1,2]
Affiliations
[1] Huazhong Univ Sci & Technol HUST, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Hubei Key Lab Smart Internet Technol, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Recommender system; News recommendation; Hypergraph neural network;
DOI
10.1016/j.engappai.2024.108256
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
News Recommendation (NR) helps users quickly find the information they are most interested in. Recently, some NR systems based on graph neural networks have achieved significant performance improvements by using a graph structure to pairwise link two nodes (e.g., user, news, and topic nodes) for representation learning. However, such pairwise edges may not be enough to describe multifaceted relations involving more than two nodes; a hypergraph structure, in which one hyperedge connects two or more nodes, might be a better choice. Considering this, we propose a dual-view hypergraph attention network for news recommendation (Hyper4NR) in this paper. In particular, we design a dual-view hypergraph structure to model users' click history, containing both topic-view hyperedges and semantic-view hyperedges. On the constructed hypergraph, we use a hyperedge-specific attention network (HSAN) to pass messages between hyperedges and nodes to encode their representations based on a self-supervised learning approach. Furthermore, we construct another kind of candidate hypergraph, on which we apply HyperGAT to obtain enhanced candidate news encodings. Extensive experiments on the widely used MIND and Adressa news datasets show that our Hyper4NR outperforms state-of-the-art NR methods.
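The abstract describes a two-stage attentive message-passing scheme over a hypergraph: nodes are first aggregated into hyperedge embeddings, which are then aggregated back into updated node embeddings, with attention at both stages. The paper's exact HSAN/HyperGAT formulation (scoring functions, activations, multi-head setup, self-supervised objective) is not given in this record, so the NumPy sketch below is only a minimal single-layer hypergraph attention pass over a binary incidence matrix, not the authors' implementation; the function names (`hypergat_layer`, `masked_softmax`) and the simplified dot-product scoring are assumptions for illustration.

```python
import numpy as np

def masked_softmax(scores, mask, axis):
    """Softmax restricted to positions where mask is True; others get zero weight."""
    scores = np.where(mask, scores, -1e9)  # effectively -inf for masked slots
    w = np.exp(scores - scores.max(axis=axis, keepdims=True)) * mask
    return w / w.sum(axis=axis, keepdims=True)

def hypergat_layer(X, H, Wn, We, a_edge, a_node):
    """One node -> hyperedge -> node attention pass (illustrative sketch).

    X:  (N, d) node features.
    H:  (N, E) binary incidence matrix, H[i, e] = 1 iff node i is in hyperedge e.
        Assumes every hyperedge has >= 1 member and every node >= 1 hyperedge.
    Returns (N, d') updated node features.
    """
    Xp = np.tanh(X @ Wn)                                     # project node features
    # Stage 1, node -> hyperedge: attend over the member nodes of each hyperedge.
    node_scores = np.broadcast_to((Xp @ a_edge)[:, None], H.shape)
    alpha = masked_softmax(node_scores, H > 0, axis=0)       # (N, E), columns sum to 1
    Fe = np.tanh((alpha.T @ Xp) @ We)                        # (E, d') hyperedge embeddings
    # Stage 2, hyperedge -> node: attend over the hyperedges incident to each node.
    edge_scores = np.broadcast_to((Fe @ a_node)[None, :], H.shape)
    beta = masked_softmax(edge_scores, H > 0, axis=1)        # (N, E), rows sum to 1
    return beta @ Fe                                         # (N, d') updated nodes
```

In the paper's setting, hyperedges of the user hypergraph would group clicked news by topic (topic view) or by semantic similarity (semantic view); here the incidence matrix is abstract and the attention scores are a single learned vector dotted with the projected features, a deliberate simplification of attention scoring.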
Pages: 10
Related papers
50 records in total
  • [21] Dynamic News Recommendation with Hierarchical Attention Network
    Zhang, Hui
    Chen, Xu
    Ma, Shuai
    2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019), 2019, : 1456 - 1461
  • [22] MVC-HGAT: multi-view contrastive hypergraph attention network for session-based recommendation
    Yang, Fan
    Peng, Dunlu
    APPLIED INTELLIGENCE, 2025, 55 (01)
  • [23] Dual-view Attention Networks for Single Image Super-Resolution
    Guo, Jingcai
    Ma, Shiheng
    Zhang, Jie
    Zhou, Qihua
    Guo, Song
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 2728 - 2736
  • [25] Context-embedded hypergraph attention network and self-attention for session recommendation
    Zhang, Zhigao
    Zhang, Hongmei
    Zhang, Zhifeng
    Wang, Bin
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [26] Dual-view co-contrastive learning for multi-behavior recommendation
    Li, Qingfeng
    Ma, Huifang
    Zhang, Ruoyi
    Jin, Wangyu
    Li, Zhixin
    APPLIED INTELLIGENCE, 2023, 53 (17) : 20134 - 20151
  • [27] Dual-View Whitening on Pre-trained Text Embeddings for Sequential Recommendation
    Zhang, Lingzi
    Zhou, Xin
    Zeng, Zhiwei
    Shen, Zhiqi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 9332 - 9340
  • [28] Attention and Memory-Augmented Networks for Dual-View Sequential Learning
    He, Yong
    Wang, Cheng
    Li, Nan
    Zeng, Zhenyu
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 125 - 134
  • [29] Hypergraph modeling and hypergraph multi-view attention neural network for link prediction
    Chai, Lang
    Tu, Lilan
    Wang, Xianjia
    Su, Qingqing
    PATTERN RECOGNITION, 2024, 149
  • [30] A hierarchical dual-view model for fake news detection guided by discriminative lexicons
    Yang, Sijia
    Li, Xianyong
    Du, Yajun
    Huang, Dong
    Chen, Xiaoliang
    Fan, Yongquan
    Wang, Shumin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025, 16 (02) : 1071 - 1090