An Optimal Algorithm for Bandit Convex Optimization with Strongly-Convex and Smooth Loss

Citations: 0
Author
Ito, Shinji [1]
Affiliation
[1] Univ Tokyo, NEC Corp, Tokyo, Japan
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider non-stochastic bandit convex optimization with strongly-convex and smooth loss functions. For this problem, Hazan and Levy proposed an algorithm with a regret bound of Õ(d^{3/2}√T), given access to an O(d)-self-concordant barrier over the feasible region, where d and T denote the dimensionality of the feasible region and the number of rounds, respectively. However, no efficient way is known for constructing self-concordant barriers for general convex sets, and a gap of Õ(√d) has remained between the upper and lower bounds, as the known regret lower bound is Ω(d√T). Our study resolves these two issues by introducing an algorithm that achieves the optimal regret bound of Õ(d√T) under a mild assumption, without self-concordant barriers. More precisely, the algorithm requires only a membership oracle for the feasible region, and it achieves the optimal regret bound of Õ(d√T) under the assumption that the optimal solution lies in the interior of the feasible region. Even without this assumption, our algorithm achieves Õ(d^{3/2}√T) regret.
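In the bandit setting described above, the learner observes only the scalar loss value at the point it plays, never a gradient. The paper's own algorithm is not reproduced here, but the classical one-point gradient estimator of Flaxman, Kalai, and McMahan illustrates the kind of value-only feedback primitive this line of work builds on: querying the loss at a single randomly perturbed point yields an unbiased estimate of the gradient of a smoothed loss. The function names and the quadratic test loss below are illustrative choices, not taken from the paper.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """One-point (bandit) gradient estimator: with u drawn uniformly from
    the unit sphere, (d / delta) * f(x + delta * u) * u is an unbiased
    estimate of the gradient of the delta-smoothed version of f at x."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

def averaged_estimate(f, x, delta, n, seed=0):
    """Average n independent one-point estimates to reduce variance."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n):
        g += one_point_gradient_estimate(f, x, delta, rng)
    return g / n

# Demo on a strongly-convex, smooth quadratic loss; its true gradient at x
# is 2x, and (since f is quadratic) the smoothed gradient coincides with it.
f = lambda z: float(z @ z)
x = np.array([1.0, -0.5, 0.25])
grad_est = averaged_estimate(f, x, delta=0.5, n=50_000, seed=1)
```

Each individual estimate is very noisy (its variance scales with d/δ), which is one reason single-query bandit algorithms face the dimension-dependent regret factors discussed in the abstract; averaging here is only for demonstration, since an online algorithm gets one query per round.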
Pages: 2229 - 2238
Page count: 10