Scalable Neural Contextual Bandit for Recommender Systems

Cited by: 0
Authors
Zhu, Zheqing [1 ]
Van Roy, Benjamin [2 ]
Affiliations
[1] Stanford University; Meta AI, Menlo Park, CA 94025, USA
[2] Stanford University, Stanford, CA, USA
Keywords
Recommender Systems; Contextual Bandits; Reinforcement Learning; Exploration vs Exploitation; Decision Making under Uncertainty
DOI
10.1145/3583780.3615048
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
High-quality recommender systems ought to deliver both innovative and relevant content through effective and exploratory interactions with users. Yet, supervised learning-based neural networks, which form the backbone of many existing recommender systems, only leverage recognized user interests, falling short when it comes to efficiently uncovering unknown user preferences. While there has been some progress with neural contextual bandit algorithms towards enabling online exploration through neural networks, their onerous computational demands hinder widespread adoption in real-world recommender systems. In this work, we propose a scalable sample-efficient neural contextual bandit algorithm for recommender systems. To do this, we design an epistemic neural network architecture, Epistemic Neural Recommendation (ENR), that enables Thompson sampling at a large scale. In two distinct large-scale experiments with real-world tasks, ENR significantly boosts click-through rates and user ratings by at least 9% and 6% respectively compared to state-of-the-art neural contextual bandit algorithms. Furthermore, it achieves equivalent performance with at least 29% fewer user interactions compared to the best-performing baseline algorithm. Remarkably, while accomplishing these improvements, ENR demands orders of magnitude fewer computational resources than neural contextual bandit baseline algorithms.
Pages: 3636-3646 (11 pages)
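The abstract describes online exploration via Thompson sampling driven by an epistemic neural network. As a rough, non-authoritative illustration of that general recipe (not the paper's ENR architecture), the sketch below runs Thompson sampling for a contextual recommendation bandit using a bootstrapped ensemble of reward models as the epistemic component; the names EnsembleScorer, recommend, and update, as well as all hyperparameters, are illustrative assumptions.

# Minimal sketch (assumptions only, not the paper's ENR): Thompson sampling for a
# contextual recommendation bandit, with a bootstrapped ensemble of MLPs standing
# in for the epistemic neural network.
import torch
import torch.nn as nn

class EnsembleScorer(nn.Module):
    """Ensemble of reward models; disagreement across members approximates epistemic uncertainty."""
    def __init__(self, context_dim: int, num_members: int = 10, hidden: int = 64):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(context_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_members)
        ])

    def forward(self, x: torch.Tensor, member: int) -> torch.Tensor:
        # Score contexts with a single ensemble member.
        return self.members[member](x).squeeze(-1)

def recommend(model: EnsembleScorer, candidate_contexts: torch.Tensor) -> int:
    # Thompson sampling: draw one hypothesis (ensemble member), then act greedily under it.
    member = torch.randint(len(model.members), (1,)).item()
    with torch.no_grad():
        scores = model(candidate_contexts, member)
    return int(scores.argmax())

def update(model: EnsembleScorer, opt: torch.optim.Optimizer,
           context: torch.Tensor, reward: float) -> None:
    # Regress ensemble members toward the observed reward (e.g., click = 1.0).
    opt.zero_grad()
    target = torch.tensor([reward])
    losses = []
    for m in range(len(model.members)):
        # Bootstrapping: each member sees each observation with probability 0.5,
        # which keeps members diverse so their disagreement reflects uncertainty.
        if torch.rand(1).item() < 0.5:
            losses.append(nn.functional.mse_loss(model(context.unsqueeze(0), m), target))
    if losses:
        torch.stack(losses).sum().backward()
        opt.step()

# Toy usage: 5 candidate items, 16-dimensional user-item context features.
model = EnsembleScorer(context_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
candidates = torch.randn(5, 16)
chosen = recommend(model, candidates)                # item shown to the user
update(model, opt, candidates[chosen], reward=1.0)   # observed click

The ensemble above is the brute-force, computationally heavy way to represent uncertainty; ENR presumably replaces it with a much cheaper epistemic representation, which is consistent with the computational savings claimed in the abstract. The sketch only conveys the sample-then-act-greedily decision loop.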
Related Papers (50 in total)
• [1] Santana, Marlesson R. O.; Melo, Luckeciano C.; Camargo, Fernando H. F.; Brandao, Bruno; Soares, Anderson; Oliveira, Renan M.; Caetano, Sandor. Contextual Meta-Bandit for Recommender Systems Selection. RecSys 2020: 14th ACM Conference on Recommender Systems, 2020: 444-449.
• [2] Bouneffouf, Djallel. Contextual Bandit Algorithm for Risk-Aware Recommender Systems. 2016 IEEE Congress on Evolutionary Computation (CEC), 2016: 4667-4674.
• [3] Glowacka, Dorota. Bandit Algorithms in Recommender Systems. RecSys 2019: 13th ACM Conference on Recommender Systems, 2019: 574-575.
• [4] Allesiardo, Robin; Feraud, Raphael; Bouneffouf, Djallel. A Neural Networks Committee for the Contextual Bandit Problem. Neural Information Processing (ICONIP 2014), Pt. I, 2014, 8834: 374-381.
• [5] Wu, Shu; Liu, Qiang; Wang, Liang; Tan, Tieniu. Contextual Operation for Recommender Systems. IEEE Transactions on Knowledge and Data Engineering, 2016, 28(8): 2000-2012.
• [6] Zhu, Henghui; Paschalidis, Ioannis Ch.; Chang, Allen; Stern, Chantal E.; Hasselmo, Michael E. A Neural Circuit Model for a Contextual Association Task Inspired by Recommender Systems. Hippocampus, 2020, 30(4): 384-395.
• [7] Bouneffouf, Djallel; Bouzeghoub, Amel; Gancarski, Alda Lopes. A Contextual-Bandit Algorithm for Mobile Context-Aware Recommender System. Neural Information Processing (ICONIP 2012), Pt. III, 2012, 7665: 324-331.
• [8] Twardowski, Bartlomiej. Modelling Contextual Information in Session-Aware Recommender Systems with Neural Networks. Proceedings of the 10th ACM Conference on Recommender Systems (RecSys'16), 2016: 273-276.
• [9] Zhou, Qian; Zhang, XiaoFang; Xu, Jin; Liang, Bin. Large-Scale Bandit Approaches for Recommender Systems. Neural Information Processing (ICONIP 2017), Pt. I, 2017, 10634: 811-821.