Exploiting Transfer Learning With Attention for In-Domain Top-N Recommendation

Cited: 0
|
Authors
Chen, Ke-Jia [1 ,2 ]
Zhang, Hui [2 ]
Affiliations
[1] Sichuan Univ, State Key Lab Hydraul & Mt River Engn, Chengdu 610065, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Comp Sci, Nanjing 210023, Peoples R China
Source
IEEE ACCESS | 2019 / Vol. 7
Funding
National Natural Science Foundation of China;
Keywords
Multi-behavior recommendation; transfer learning; attention; multiplex network embedding;
DOI
10.1109/ACCESS.2019.2957473
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Cross-domain recommendation has recently been studied extensively as a way to alleviate the data sparsity problem. However, user-item interaction data from a source domain is often unavailable, whereas user-item interaction data of various types within the same domain is relatively easy to obtain. This paper proposes a recommendation method based on in-domain transfer learning (RiDoTA), which represents multi-type user-item interactions as a multi-behavior network in a single domain and recommends the target behavior by transferring knowledge from source behavior data. The method consists of three main steps: first, node embeddings are learned on each behavior-specific network and on a base network using a multiplex network embedding strategy; then, an attention mechanism learns the weight distribution over these embeddings during transfer; finally, a multi-layer perceptron learns the nonlinear interaction model of the target behavior. Experiments on two real-world datasets show that our model outperforms the baseline methods and three state-of-the-art related methods on the HR and NDCG metrics. The implementation of RiDoTA is available at https://github.com/sandman13/RiDoTA.
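The three steps in the abstract (per-network embeddings, attention-weighted fusion, MLP scoring) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions, the dot-product attention query, and the random parameters are all assumptions; the actual RiDoTA model is in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical setup: 3 behavior-specific networks plus 1 base network,
# each yielding a d-dimensional embedding for the same user (step 1).
d = 8
embeddings = rng.normal(size=(4, d))

# Step 2: attention over the source embeddings. Here a simple dot-product
# score against a (randomly initialized) query vector stands in for the
# learned attention; weights form a distribution over the networks.
query = rng.normal(size=d)
weights = softmax(embeddings @ query)
fused = weights @ embeddings  # attention-weighted user representation

# Step 3: a two-layer MLP scores a user-item pair for the target behavior.
item = rng.normal(size=d)
x = np.concatenate([fused, item])
W1, b1 = rng.normal(size=(16, 2 * d)), np.zeros(16)
W2, b2 = rng.normal(size=16), 0.0
h = np.maximum(0.0, W1 @ x + b1)              # ReLU hidden layer
score = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # predicted interaction probability
```

For top-N recommendation, this score would be computed for each candidate item and the N highest-scoring items returned.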
Pages: 175041-175050
Page count: 10
Related Papers
50 records in total
  • [31] Efficient Learning-Based Recommendation Algorithms for Top-N Tasks and Top-N Workers in Large-Scale Crowdsourcing Systems
    Safran, Mejdl
    Che, Dunren
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2019, 37 (01)
  • [32] Knowledge distillation meets recommendation: collaborative distillation for top-N recommendation
    Lee, Jae-woong
    Choi, Minjin
    Sael, Lee
    Shim, Hyunjung
    Lee, Jongwuk
    KNOWLEDGE AND INFORMATION SYSTEMS, 2022, 64 (05) : 1323 - 1348
  • [33] Candidate Set Sampling for Evaluating Top-N Recommendation
    Ihemelandu, Ngozi
    Ekstrand, Michael D.
    2023 IEEE INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY, WI-IAT, 2023, : 88 - 94
  • [34] NCDREC: A Decomposability Inspired Framework for Top-N Recommendation
    Nikolakopoulos, Athanasios N.
    Garofalakis, John D.
    2014 IEEE/WIC/ACM INTERNATIONAL JOINT CONFERENCES ON WEB INTELLIGENCE (WI) AND INTELLIGENT AGENT TECHNOLOGIES (IAT), VOL 1, 2014, : 183 - 190
  • [35] Random walk models for top-N recommendation task
    Zhang, Yin
    Wu, Jiang-qin
    Zhuang, Yue-ting
    JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE A, 2009, 10 (07): : 927 - 936
  • [36] Top-N Recommendation based on Mutual Trust and Influence
    Seng, D. W.
    Liu, J. X.
    Zhang, X. F.
    Chen, J.
    Fang, X. J.
    INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2019, 14 (04) : 540 - 556
  • [37] Knowledge distillation meets recommendation: collaborative distillation for top-N recommendation
    Jae-woong Lee
    Minjin Choi
    Lee Sael
    Hyunjung Shim
    Jongwuk Lee
    Knowledge and Information Systems, 2022, 64 : 1323 - 1348
  • [38] Top-N recommendation algorithm integrated neural network
    Zhang, Liang
    NEURAL COMPUTING & APPLICATIONS, 2021, 33 (09): : 3881 - 3889
  • [39] Exploiting Nonlinear Relationships for Top-N Recommender Systems
    Kang, Zhao
    Peng, Chong
    Yang, Ming
    Cheng, Qiang
    2017 IEEE INTERNATIONAL CONFERENCE ON BIG KNOWLEDGE (IEEE ICBK 2017), 2017, : 49 - 56
  • [40] Unifying Explicit and Implicit Feedback for Top-N Recommendation
    Liu, Siping
    Tu, Xiaohan
    Li, Renfa
    2017 IEEE 2ND INTERNATIONAL CONFERENCE ON BIG DATA ANALYSIS (ICBDA), 2017, : 35 - 39