Arm Space Decomposition as a Strategy for Tackling Large Scale Multi-Armed Bandit Problems

Cited by: 0
Authors
Gupta, Neha [1 ]
Granmo, Ole-Christoffer [2 ]
Agrawala, Ashok [1 ]
Affiliations
[1] Univ Maryland, College Pk, MD 20742 USA
[2] Univ Agder, Grimstad, Norway
Keywords
GAMES;
DOI
10.1109/ICMLA.2013.51
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent multi-armed bandit-based optimization schemes provide near-optimal balancing of arm exploration against arm exploitation, allowing the optimal arm to be identified with probability arbitrarily close to unity. However, the convergence speed drops dramatically as the number of bandit arms grows large, simply because singling out the optimal arm requires experimentation with all of the available arms. Furthermore, effective exploration and exploitation typically demands computational resources that grow linearly with the number of arms. Although the former problem can be remedied to some degree when prior knowledge about arm correlation is available, the latter problem persists. In this paper we propose a Thompson Sampling (TS) based scheme for exploring an arm space of size K by decomposing it into two separate arm spaces, each of size √K, thus achieving sub-linear scalability. In brief, two dedicated Thompson Samplers explore each arm space separately. However, at each iteration, arm selection feedback is obtained by jointly considering the arms selected by each of the Thompson Samplers, mapping them into the original arm space. This kind of decentralized decision-making can be modeled as a game theory problem, where two independent decision makers interact in terms of a common pay-off game. Our scheme requires no communication between the decision makers, who have complete autonomy over their actions. Thus it is ideal for coordinating autonomous agents in a multi-agent system. Extensive experiments, including instances possessing multiple Nash equilibria, demonstrate remarkable performance benefits. Although TS-based schemes are already among the top-performing bandit players, our proposed arm space decomposition scheme provides drastic improvements for large arm spaces, not only in terms of processing speed and memory usage, but also in terms of an improved ability to identify the optimal arm, a benefit that grows with the number of bandit arms.
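The decomposition the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes Bernoulli rewards, a perfect-square K, and Beta(1, 1) priors, all of which are illustrative choices not specified in the record. Each of the two Thompson Samplers explores only √K sub-arms; their joint pick is mapped to one of the K original arms, and both samplers update on the same shared pay-off, with no other communication between them.

```python
import random

def decomposed_thompson_sampling(reward_probs, horizon=5000, seed=0):
    """Sketch of arm-space decomposition: two Thompson Samplers over
    sqrt(K)-sized sub-arm spaces jointly select one of K arms."""
    rng = random.Random(seed)
    K = len(reward_probs)
    m = int(round(K ** 0.5))
    assert m * m == K, "this sketch assumes K is a perfect square"
    # One Beta(1, 1) prior per sub-arm, per sampler.
    params = [[[1, 1] for _ in range(m)] for _ in range(2)]
    for _ in range(horizon):
        # Each sampler independently samples its posteriors and picks
        # the sub-arm with the highest sampled success probability.
        picks = [max(range(m),
                     key=lambda a: rng.betavariate(params[s][a][0],
                                                   params[s][a][1]))
                 for s in range(2)]
        arm = picks[0] * m + picks[1]  # map the pair into the original space
        reward = 1 if rng.random() < reward_probs[arm] else 0
        # Both samplers update on the same shared Bernoulli pay-off
        # (common pay-off game, no direct communication).
        for s in range(2):
            params[s][picks[s]][1 - reward] += 1
    # Report the arm the two samplers jointly favour via posterior means.
    best = [max(range(m), key=lambda a: params[s][a][0] / sum(params[s][a]))
            for s in range(2)]
    return best[0] * m + best[1]
```

Per iteration, each sampler draws from only √K posteriors, which is where the sub-linear per-step cost and memory footprint claimed in the abstract come from.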
Pages: 252 - 257
Page count: 6
Related Papers
(50 total)
  • [1] A Satisficing Strategy with Variable Reference in the Multi-armed Bandit Problems
    Kohno, Yu
    Takahashi, Tatsuji
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2014 (ICNAAM-2014), 2015, 1648
  • [3] An asymptotically optimal strategy for constrained multi-armed bandit problems
    Hyeong Soo Chang
    Mathematical Methods of Operations Research, 2020, 91 : 545 - 557
  • [4] Satisficing in Multi-Armed Bandit Problems
    Reverdy, Paul
    Srivastava, Vaibhav
    Leonard, Naomi Ehrich
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2017, 62 (08) : 3788 - 3803
  • [5] A Multi-Armed Bandit Strategy for Countermeasure Selection
    Cochrane, Madeleine
    Hunjet, Robert
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 2510 - 2515
  • [6] Anytime Algorithms for Multi-Armed Bandit Problems
    Kleinberg, Robert
    PROCEEDINGS OF THE SEVENTHEENTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 2006, : 928 - 936
  • [7] Percentile optimization in multi-armed bandit problems
    Ghatrani, Zahra
    Ghate, Archis
    ANNALS OF OPERATIONS RESEARCH, 2024, 340 (2-3) : 837 - 862
  • [8] Multi-armed Bandit Requiring Monotone Arm Sequences
    Chen, Ningyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] Ambiguity aversion in multi-armed bandit problems
    Anderson, Christopher M.
    THEORY AND DECISION, 2012, 72 (01) : 15 - 33
  • [10] Multi-armed Bandit Problems with Strategic Arms
    Braverman, Mark
    Mao, Jieming
    Schneider, Jon
    Weinberg, S. Matthew
    CONFERENCE ON LEARNING THEORY, VOL 99, 2019, 99