Knowledge distillation meets recommendation: collaborative distillation for top-N recommendation

Cited by: 2
Authors
Lee, Jae-woong [1 ]
Choi, Minjin [2 ]
Sael, Lee [3 ,4 ]
Shim, Hyunjung [5 ]
Lee, Jongwuk [2 ]
Affiliations
[1] Sungkyunkwan Univ, Seoul, South Korea
[2] Sungkyunkwan Univ, Dept Comp Sci & Engn, Seoul, South Korea
[3] Ajou Univ, Dept Software & Comp Engn, Dept Artificial Intelligence, Seoul, South Korea
[4] Ajou Univ, Dept Convergence Healthcare Med, Seoul, South Korea
[5] Yonsei Univ, Sch Integrated Technol, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Knowledge distillation; Top-N recommendation; Collaborative filtering; Data sparsity; Data ambiguity;
DOI
10.1007/s10115-022-01667-8
CLC classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Knowledge distillation (KD) is a successful method for transferring knowledge from one model (the teacher) to another (the student). Despite the success of KD in classification tasks, applying it to recommender models is challenging because of the sparsity of positive feedback, the ambiguity of missing feedback, and the ranking nature of top-N recommendation. In this paper, we propose a new KD model for collaborative filtering, namely collaborative distillation (CD). Specifically, (1) we reformulate the loss function to deal with the ambiguity of missing feedback; (2) we exploit probabilistic rank-aware sampling for top-N recommendation; and (3) to train the proposed model effectively, we develop two training strategies for the student model, called teacher-guided and student-guided training, which adaptively select the most beneficial feedback from the teacher model. Furthermore, we extend our model with self-distillation, called born-again CD (BACD): the teacher and student models have the same capacity and are trained with the proposed distillation method. The experimental results demonstrate that CD outperforms the state-of-the-art method by 2.7-33.2% in hit rate (HR) and 2.7-29.9% in normalized discounted cumulative gain (NDCG). Moreover, BACD improves over the teacher model by 3.5-12.0% in HR and 4.9-13.3% in NDCG.
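The record stops at the abstract, so the paper's exact loss and sampling distribution are not shown here. As a rough illustration of the abstract's core idea (soften missing feedback with the teacher's predictions, and draw missing entries in a rank-aware way rather than uniformly), below is a minimal PyTorch-style sketch for a single user. The function names, the exponential rank decay, and the balancing weight alpha are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def rank_aware_sample(teacher_scores: torch.Tensor, num_samples: int) -> torch.Tensor:
    """Sample indices of unobserved items, favoring those the teacher ranks high.

    Illustrative stand-in for probabilistic rank-aware sampling: the
    exponential decay over rank is an assumption, not the paper's exact
    sampling distribution.
    """
    # ranks[i] = position of item i in the teacher's descending score order (0 = best).
    ranks = teacher_scores.argsort(descending=True).argsort().float()
    probs = torch.exp(-ranks / ranks.numel())  # higher-ranked items drawn more often
    return torch.multinomial(probs / probs.sum(), num_samples, replacement=False)

def cd_style_loss(student_logits, teacher_logits, observed, alpha=0.5, num_samples=10):
    """Blend a hard loss on observed positives with a soft, teacher-matching
    loss on rank-aware samples of the missing entries (binary implicit
    feedback assumed; alpha is a hypothetical balancing weight)."""
    # Hard term: observed interactions are treated as positive labels.
    hard = F.binary_cross_entropy_with_logits(
        student_logits[observed], torch.ones_like(student_logits[observed])
    )
    # Soft term: on sampled missing entries, match the teacher's probabilities
    # instead of assuming they are all negatives, reflecting their ambiguity.
    missing = ~observed
    idx = rank_aware_sample(teacher_logits[missing], num_samples)
    soft = F.binary_cross_entropy_with_logits(
        student_logits[missing][idx], torch.sigmoid(teacher_logits[missing][idx])
    )
    return alpha * hard + (1.0 - alpha) * soft

# Per-user toy usage: 1,000 items, three observed positives.
student_logits = torch.randn(1000, requires_grad=True)
teacher_logits = torch.randn(1000)  # frozen teacher predictions
observed = torch.zeros(1000, dtype=torch.bool)
observed[[3, 42, 777]] = True
loss = cd_style_loss(student_logits, teacher_logits, observed)
loss.backward()
```

The key design point, per the abstract, is that sampled missing entries carry the teacher's soft probabilities rather than being forced to zero as in plain negative sampling, which is how the ambiguity of missing feedback is handled.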
Pages: 1323-1348
Number of pages: 26
Related papers
50 records in total
  • [21] Error-based collaborative filtering algorithm for top-N recommendation
    Kim, Heung-Nam
    Ji, Ae-Ttie
    Kim, Hyun-Jun
    Jo, Geun-Sik
    ADVANCES IN DATA AND WEB MANAGEMENT, PROCEEDINGS, 2007, 4505 : 594 - +
  • [22] Gated Knowledge Graph Neural Networks for Top-N Recommendation System
    Mu, Nan
    Zha, Daren
    Gong, Rui
    PROCEEDINGS OF THE 2021 IEEE 24TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN (CSCWD), 2021, : 1111 - 1116
  • [24] Towards a Dynamic Top-N Recommendation Framework
    Liu, Xin
    Aberer, Karl
    PROCEEDINGS OF THE 8TH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS'14), 2014, : 217 - 224
  • [25] Top-N Recommendation Model Based on SDAE
    Bao, Rui
    Sun, Yipin
    2018 INTERNATIONAL CONFERENCE ON COMPUTER INFORMATION SCIENCE AND APPLICATION TECHNOLOGY, 2019, 1168
  • [26] Assessing ranking metrics in top-N recommendation
    Valcarce, Daniel
    Bellogin, Alejandro
    Parapar, Javier
    Castells, Pablo
    INFORMATION RETRIEVAL JOURNAL, 2020, 23 (04): : 411 - 448
  • [27] Reinforcement Learning to Diversify Top-N Recommendation
    Zou, Lixin
    Xia, Long
    Ding, Zhuoye
    Yin, Dawei
    Song, Jiaxing
    Liu, Weidong
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2019), PT II, 2019, 11447 : 104 - 120
  • [28] A Poisson Regression Method for Top-N Recommendation
    Huang, Jiajin
    Wang, Jian
    Zhong, Ning
    SIGIR'17: PROCEEDINGS OF THE 40TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2017, : 885 - 888
  • [29] Logic Tensor Networks for Top-N Recommendation
    Carraro, Tommaso
    Daniele, Alessandro
    Aiolli, Fabio
    Serafini, Luciano
    AIXIA 2022 - ADVANCES IN ARTIFICIAL INTELLIGENCE, 2023, 13796 : 110 - 123
  • [30] Logic Tensor Networks for Top-N Recommendation
    Carraro, Tommaso
    Daniele, Alessandro
    Aiolli, Fabio
    Serafini, Luciano
    NEURAL-SYMBOLIC LEARNING AND REASONING, NESY 2022, 2022, : 1 - 14