Learning Compact Compositional Embeddings via Regularized Pruning for Recommendation

Cited by: 2
Authors
Liang, Xurong [1 ]
Chen, Tong [1 ]
Quoc Viet Hung Nguyen [2 ]
Li, Jianxin [3 ]
Yin, Hongzhi [1 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
[2] Griffith Univ, Nathan, Qld, Australia
[3] Deakin Univ, Geelong, Vic, Australia
Funding
Australian Research Council
Keywords
lightweight recommender systems; compositional embeddings; regularized pruning
DOI
10.1109/ICDM58522.2023.00047
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Latent factor models are the dominant backbones of contemporary recommender systems (RSs) given their performance advantages, where a unique vector embedding with a fixed dimensionality (e.g., 128) is required to represent each entity (commonly a user/item). Due to the large number of users and items on e-commerce sites, the embedding table is arguably the least memory-efficient component of RSs. For any lightweight recommender that aims to efficiently scale with the growing number of users/items or to remain applicable in resource-constrained settings, existing solutions either reduce the number of embeddings needed via hashing, or sparsify the full embedding table to switch off selected embedding dimensions. However, as hash collisions arise or embeddings become overly sparse, especially when adapting to a tighter memory budget, those lightweight recommenders inevitably have to compromise their accuracy. To this end, we propose a novel compact embedding framework for RSs, namely Compositional Embedding with Regularized Pruning (CERP). Specifically, CERP represents each entity by combining a pair of embeddings from two independent, substantially smaller meta-embedding tables, which are then jointly pruned via a learnable element-wise threshold. In addition, we innovatively design a regularized pruning mechanism in CERP, such that the two sparsified meta-embedding tables are encouraged to encode information that is mutually complementary. Given its compatibility with arbitrary latent factor models, we pair CERP with two popular recommendation models for extensive experiments, where results on two real-world datasets under different memory budgets demonstrate its superiority over state-of-the-art baselines. The codebase of CERP is available at https://github.com/xurong-liang/CERP.
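The core idea described in the abstract — composing each entity's embedding from two much smaller meta-embedding tables and pruning them with a learnable element-wise threshold — can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's exact formulation: the quotient-remainder indexing, the element-wise sum as the combination operator, and the sigmoid-reparameterized threshold are all stand-ins chosen for clarity; consult the official codebase for CERP's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

num_entities = 10_000  # total users + items (hypothetical)
bucket_size = 100      # remainder-table rows; quotient table has ceil(N/100) rows
dim = 16               # embedding dimensionality

# Two independent, substantially smaller meta-embedding tables.
# Together they hold (100 + 100) * 16 parameters instead of 10_000 * 16.
q_table = rng.normal(0.0, 0.1, (int(np.ceil(num_entities / bucket_size)), dim))
r_table = rng.normal(0.0, 0.1, (bucket_size, dim))

def soft_prune(table, s):
    """Zero out entries whose magnitude falls below a learnable threshold.

    `s` plays the role of a trainable scalar; sigmoid keeps the
    threshold positive. In training, the mask would be relaxed to
    stay differentiable; here we just apply it post hoc.
    """
    threshold = 1.0 / (1.0 + np.exp(-s))
    mask = (np.abs(table) > threshold).astype(table.dtype)
    return table * mask

q_sparse = soft_prune(q_table, s=-2.0)
r_sparse = soft_prune(r_table, s=-2.0)

def embed(entity_id):
    """Compose one compact embedding from the two pruned meta-tables."""
    q = q_sparse[entity_id // bucket_size]  # quotient lookup
    r = r_sparse[entity_id % bucket_size]   # remainder lookup
    return q + r  # element-wise combination (sum assumed here)

vec = embed(4217)
print(vec.shape)  # (16,)
print(np.count_nonzero(q_sparse) / q_sparse.size)  # fraction of surviving weights
```

Because the quotient/remainder pair is unique per entity ID, distinct entities map to distinct table-row combinations even though each individual table is far smaller than the entity count, which is what lets the composed table fit a tight memory budget.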
Pages: 378-387
Page count: 10
Related Papers (50 in total)
  • [31] Learning Monolingual Compositional Representations via Bilingual Supervision
    Elgohary, Ahmed
    Carpuat, Marine
    PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2016), VOL 2, 2016, : 362 - 368
  • [32] Learning Geographical Hierarchy Features via a Compositional Model
    Zhang, Xiaoming
    Hu, Xia
    Wang, Senzhang
    Yang, Yang
    Li, Zhoujun
    Zhou, Jianshe
    IEEE TRANSACTIONS ON MULTIMEDIA, 2016, 18 (09) : 1855 - 1868
  • [33] Learning Compositional Rules via Neural Program Synthesis
    Nye, Maxwell I.
    Solar-Lezama, Armando
    Tenenbaum, Joshua B.
    Lake, Brenden M.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [34] Learning compositional functions via multiplicative weight updates
    Bernstein, Jeremy
    Zhao, Jiawei
    Meister, Markus
    Liu, Ming-Yu
    Anandkumar, Anima
    Yue, Yisong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [35] Pruning via dynamic adaptation of the forgetting rate in structural learning
    Miller, DA
    Zurada, JM
    Lilly, JH
    ICNN - 1996 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, VOLS. 1-4, 1996, : 448 - 452
  • [36] Channel Pruning via Lookahead Search Guided Reinforcement Learning
    Wang, Zi
    Li, Chengcheng
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 3513 - 3524
  • [37] Training extreme learning machine via regularized correntropy criterion
    Xing, Hong-Jie
    Wang, Xin-Mei
    NEURAL COMPUTING & APPLICATIONS, 2013, 23 (7-8): : 1977 - 1986
  • [39] Estimates of learning rates of regularized regression via polyline functions
    Cao, Feilong
    Lee, Joonwhoan
    Zhang, Yongquan
    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, 2012, 35 (02) : 174 - 181
  • [40] DETECTING ANOMALY IN CHEMICAL SENSORS VIA REGULARIZED CONTRASTIVE LEARNING
    Badawi, Diaa
    Bassi, Ishaan
    Ozev, Sule
    Cetin, Ahmet Enis
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 86 - 90