Multi-Objective Generalized Linear Bandits

Cited by: 0
Authors
Lu, Shiyin [1 ]
Wang, Guanghui [1 ]
Hu, Yao [2 ]
Zhang, Lijun [1 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[2] Alibaba Grp, YouKu Cognit & Intelligent Lab, Beijing 100102, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we study the multi-objective bandits (MOB) problem, where a learner repeatedly selects one arm to play and then receives a reward vector consisting of multiple objectives. MOB has found many real-world applications as varied as online recommendation and network routing. On the other hand, these applications typically contain contextual information that can guide the learning process but is ignored by most existing work. To utilize this information, we associate each arm with a context vector and assume the reward follows the generalized linear model (GLM). We adopt the notion of Pareto regret to evaluate the learner's performance and develop a novel algorithm for minimizing it. The essential idea is to apply a variant of the online Newton step to estimate model parameters, based on which we utilize the upper confidence bound (UCB) policy to construct an approximation of the Pareto front, and then choose one arm uniformly at random from the approximate Pareto front. Theoretical analysis shows that the proposed algorithm achieves an Õ(d√T) Pareto regret, where T is the time horizon and d is the dimension of contexts, which matches the optimal result for the single-objective contextual bandits problem. Numerical experiments demonstrate the effectiveness of our method.
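The arm-selection step sketched in the abstract can be illustrated with a minimal example. The sketch below is only an illustration of the Pareto-front/UCB idea, not the paper's algorithm: the paper derives the confidence bounds from an online-Newton-step GLM estimator, whereas here the UCB matrix is simply given, and all names (`pareto_front`, `ucb`) are hypothetical.

```python
import numpy as np

def pareto_front(ucb):
    """Indices of arms whose UCB vectors are not dominated.

    ucb: (K, m) array holding one optimistic reward estimate per arm
    (rows) and objective (columns). Arm i is dominated if some arm j
    is at least as good on every objective and strictly better on at
    least one; the non-dominated arms form the approximate Pareto front.
    """
    K = ucb.shape[0]
    front = []
    for i in range(K):
        dominated = any(
            np.all(ucb[j] >= ucb[i]) and np.any(ucb[j] > ucb[i])
            for j in range(K) if j != i
        )
        if not dominated:
            front.append(i)
    return front

rng = np.random.default_rng(0)
ucb = np.array([[0.9, 0.2],   # strong on objective 1
                [0.2, 0.9],   # strong on objective 2
                [0.1, 0.1]])  # dominated by both other arms
front = pareto_front(ucb)     # arms 0 and 1 survive
arm = front[rng.integers(len(front))]  # play one front arm uniformly
```

Playing uniformly over the front, rather than scalarizing the objectives, is what lets the learner cover all Pareto-optimal trade-offs instead of committing to one.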
Pages: 3080-3086 (7 pages)
Related Papers
50 records in total
  • [1] Multi-objective Bandits: Optimizing the Generalized Gini Index
    Busa-Fekete, Robert
    Szorenyi, Balazs
    Weng, Paul
    Mannor, Shie
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [2] Hierarchize Pareto Dominance in Multi-Objective Stochastic Linear Bandits
    Cheng, Ji
    Xue, Bo
    Yi, Jiaxiang
    Zhang, Qingfu
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 10, 2024, : 11489 - 11497
  • [3] MULTI-OBJECTIVE CONTEXTUAL BANDITS WITH A DOMINANT OBJECTIVE
    Tekin, Cem
    Turgay, Eralp
    2017 IEEE 27TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING, 2017,
  • [4] Contextual Bandits for Multi-Objective Recommender Systems
    Lacerda, Anisio
    2015 BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS 2015), 2015, : 68 - 73
  • [5] Blending Controllers via Multi-Objective Bandits
    Gohari, Parham
    Djeumou, Franck
    Vinod, Abraham P.
    Topcu, Ufuk
    2022 AMERICAN CONTROL CONFERENCE, ACC, 2022, : 88 - 95
  • [6] Multi-Objective X-Armed Bandits
    Van Moffaert, Kristof
    Van Vaerenbergh, Kevin
    Vrancx, Peter
    Nowe, Ann
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 2331 - 2338
  • [7] Multi-Objective Ranked Bandits for Recommender Systems
    Lacerda, Anisio
    NEUROCOMPUTING, 2017, 246 : 12 - 24
  • [8] Sequential Learning of the Pareto Front for Multi-objective Bandits
    Crepon, Elise
    Garivier, Aurelien
    Koolen, Wouter M.
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [9] PAC models in stochastic multi-objective multi-armed bandits
    Drugan, Madalina M.
    PROCEEDINGS OF THE 2017 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO'17), 2017, : 409 - 416
  • [10] Designing multi-objective multi-armed bandits algorithms: a study
    Drugan, Madalina M.
    Nowe, Ann
    2013 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2013,