Exploration-exploitation tradeoff using variance estimates in multi-armed bandits

Cited: 296
Authors
Audibert, Jean-Yves [1 ,2 ]
Munos, Remi [3 ]
Szepesvari, Csaba [4 ]
Affiliations
[1] Univ Paris Est, Ecole Ponts ParisTech, CERTIS, F-77455 Marne La Vallee, France
[2] Willow ENS INRIA, F-75005 Paris, France
[3] INRIA Lille Nord Europe, SequeL Project, F-59650 Villeneuve d'Ascq, France
[4] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E8, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Exploration-exploitation tradeoff; Multi-armed bandits; Bernstein's inequality; High-probability bound; Risk analysis;
DOI
10.1016/j.tcs.2009.01.016
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Algorithms based on upper confidence bounds for balancing exploration and exploitation are gaining popularity, since they are easy to implement, efficient and effective. This paper considers a variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms. In earlier experimental works, such algorithms were found to outperform the competing algorithms. We provide the first analysis of the expected regret for such algorithms. As expected, our results show that the algorithm that uses the variance estimates has a major advantage over its alternatives that do not use such estimates, provided that the variances of the payoffs of the suboptimal arms are low. We also prove that the regret concentrates only at a polynomial rate. This holds for all the upper confidence bound based algorithms and for all bandit problems except those special ones where, with probability one, the payoff obtained by pulling the optimal arm is larger than the expected payoff for the second best arm. Hence, although upper confidence bound bandit algorithms achieve logarithmic expected regret rates, they might not be suitable for a risk-averse decision maker. We illustrate some of the results by computer simulations. (C) 2009 Elsevier B.V. All rights reserved.
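To make the abstract's method concrete, here is a minimal sketch of a variance-aware index policy of the kind the paper analyzes (UCB-V): each arm's upper confidence bound is its empirical mean plus an empirical-Bernstein width built from the estimated variance. The class name UCBV, the constants zeta and c, and the exploration function zeta * log(t) are illustrative assumptions in this sketch, not the paper's exact tuning.

```python
import math
import random

class UCBV:
    """Minimal sketch of a UCB-V-style index policy for rewards in [0, 1]."""

    def __init__(self, n_arms, zeta=1.2, c=1.0):
        self.n_arms = n_arms
        self.zeta = zeta               # exploration rate in E_t = zeta * log(t) (illustrative)
        self.c = c                     # scale of the second-order (range) term (illustrative)
        self.counts = [0] * n_arms     # pulls per arm
        self.sums = [0.0] * n_arms     # sum of rewards per arm
        self.sq_sums = [0.0] * n_arms  # sum of squared rewards per arm

    def select(self, t):
        # Play each arm once before trusting any statistics.
        for k in range(self.n_arms):
            if self.counts[k] == 0:
                return k
        e_t = self.zeta * math.log(t)
        best_k, best_index = 0, float("-inf")
        for k in range(self.n_arms):
            s = self.counts[k]
            mean = self.sums[k] / s
            var = max(self.sq_sums[k] / s - mean * mean, 0.0)  # empirical variance
            # Empirical-Bernstein index: mean + variance-driven width + range term.
            index = mean + math.sqrt(2.0 * var * e_t / s) + 3.0 * self.c * e_t / s
            if index > best_index:
                best_k, best_index = k, index
        return best_k

    def update(self, k, reward):
        self.counts[k] += 1
        self.sums[k] += reward
        self.sq_sums[k] += reward * reward

# Toy run on two Bernoulli arms (means 0.5 and 0.6).
policy = UCBV(n_arms=2)
means = [0.5, 0.6]
for t in range(1, 10001):
    k = policy.select(t)
    policy.update(k, 1.0 if random.random() < means[k] else 0.0)
print(policy.counts)  # the better arm should receive most of the pulls
```

The paper's headline result is visible in the variance term of the index: a suboptimal arm with low payoff variance gets a narrow confidence width and is abandoned quickly, which is where the advantage over variance-blind upper-confidence-bound rules comes from.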
Pages: 1876 - 1902
Number of pages: 27
Related Papers
50 items in total
  • [31] Pruning neural networks using multi-armed bandits
    Ameen S.
    Vadera S.
    Computer Journal, 2020, 63 (07): 1099 - 1108
  • [32] Optimal Query Selection Using Multi-Armed Bandits
    Kocanaogullari, Aziz
    Marghi, Yeganeh M.
    Akcakaya, Murat
    Erdogmus, Deniz
    IEEE SIGNAL PROCESSING LETTERS, 2018, 25 (12) : 1870 - 1874
  • [33] Fast Beam Alignment via Pure Exploration in Multi-Armed Bandits
    Wei, Yi
    Zhong, Zixin
    Tan, Vincent Y. F.
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (05) : 3264 - 3279
  • [34] Pure Exploration of Multi-Armed Bandits with Heavy-Tailed Payoffs
    Yu, Xiaotian
    Shao, Han
    Lyu, Michael R.
    King, Irwin
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2018, : 937 - 946
  • [35] Diversity-Driven Selection of Exploration Strategies in Multi-Armed Bandits
    Benureau, Fabien
    Oudeyer, Pierre-Yves
    5TH INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND ON EPIGENETIC ROBOTICS (ICDL-EPIROB), 2015, : 135 - 142
  • [37] Multi-armed bandits for performance marketing
    Gigli, Marco
    Stella, Fabio
    INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, 2024,
  • [38] Lenient Regret for Multi-Armed Bandits
    Merlis, Nadav
    Mannor, Shie
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 8950 - 8957
  • [39] Finding structure in multi-armed bandits
    Schulz, Eric
    Franklin, Nicholas T.
    Gershman, Samuel J.
    COGNITIVE PSYCHOLOGY, 2020, 119
  • [40] ON MULTI-ARMED BANDITS AND DEBT COLLECTION
    Czekaj, Lukasz
    Biegus, Tomasz
    Kitlowski, Robert
    Tomasik, Pawel
    36TH ANNUAL EUROPEAN SIMULATION AND MODELLING CONFERENCE, ESM 2022, 2022, : 137 - 141