Approximately Stationary Bandits with Knapsacks

Cited: 0
Authors
Fikioris, Giannis [1]
Tardos, Eva [1]
Affiliations
[1] Cornell Univ, Ithaca, NY 14853 USA
Keywords
Bandits with Knapsacks; Best of both worlds; Adversarial; Approximately Stationary
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Bandits with Knapsacks (BwK), the generalization of the Multi-Armed Bandits problem to settings with global budget constraints, has received considerable attention in recent years. It has numerous applications, including dynamic pricing, repeated auctions, ad allocation, and network scheduling. Previous work has focused on one of two extremes: Stochastic BwK, where the rewards and resource consumptions of each round are drawn i.i.d. from a fixed distribution, and Adversarial BwK, where these parameters are picked by an adversary. The achievable guarantees in the two cases exhibit a massive gap: no-regret learning is achievable in the stochastic case, but in the adversarial case only competitive-ratio-style guarantees are achievable, where the competitive ratio depends either on the budget or on both the time horizon and the number of resources. What makes this gap so vast is that in Adversarial BwK the guarantees get worse in the typical case where the budget is more binding. While "best-of-both-worlds" algorithms are known (single algorithms that provide the best achievable guarantee in each extreme case), their bounds degrade to the adversarial guarantee as soon as the environment is not fully stochastic. Our work aims to bridge this gap, offering guarantees for workloads that are not exactly stochastic but also not worst-case. We define a condition, Approximately Stationary BwK, that parameterizes how close to stochastic or adversarial an instance is, and based on these parameters we explore the best competitive ratio attainable in BwK. We present two algorithms that are oblivious to the values of the parameters yet guarantee competitive ratios that smoothly interpolate between the best possible guarantees in the two extreme cases, depending on those values. Our guarantees offer a substantial improvement over the adversarial guarantee, especially when the available budget is small. We also prove bounds on the achievable guarantee, showing that our results are approximately tight when the budget is small.
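For concreteness, the interaction protocol underlying these guarantees can be sketched as follows. This is a standard BwK formulation, not the paper's own notation; the symbols $K$, $d$, $B$, $T$, and $\mathrm{OPT}$ are introduced here purely for illustration. In each round $t = 1, \dots, T$ the learner picks an arm $a_t \in \{1, \dots, K\}$ and then observes a reward $r_t(a_t) \in [0,1]$ and a consumption $c_{t,j}(a_t) \in [0,1]$ for each of the $d$ resources; the process stops at the first round in which some resource exhausts its budget $B$,
$$\tau \;=\; \min\Big\{ t \;:\; \exists j \ \textstyle\sum_{s=1}^{t} c_{s,j}(a_s) > B \Big\}.$$
A no-regret guarantee promises total reward at least $\mathrm{OPT} - o(\mathrm{OPT})$, whereas a competitive-ratio guarantee with ratio $\alpha \ge 1$ only promises
$$\sum_{t=1}^{\tau} r_t(a_t) \;\ge\; \frac{\mathrm{OPT}}{\alpha} - o(\mathrm{OPT}),$$
where $\mathrm{OPT}$ denotes the benchmark reward (e.g., that of the best fixed distribution over arms in hindsight).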
Pages: 25