High probability guarantees for stochastic convex optimization

Cited by: 0
Authors
Davis, Damek [1]
Drusvyatskiy, Dmitriy [2]
Affiliations
[1] Cornell Univ, Sch ORIE, Ithaca, NY 14850 USA
[2] Univ Washington, Dept Math, Seattle, WA 98195 USA
Keywords
Proximal point; robust distance estimation; stochastic approximation; empirical risk; APPROXIMATION ALGORITHMS; COMPOSITE OPTIMIZATION
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on "light-tail" noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization.
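The robust distance estimation ingredient mentioned in the abstract has a standard, simple form: run the base stochastic algorithm several times independently, then select the candidate whose median distance to the other candidates is smallest. The sketch below is a generic illustration of that selection rule, not the authors' exact proxBoost procedure; the function name `robust_select` and the demo data are assumptions for illustration.

```python
import numpy as np

def robust_select(candidates):
    """Robust distance estimation over m candidate solutions.

    If each independent run lands within radius r of the optimum with
    probability greater than 1/2, then a majority of candidates cluster
    near the optimum, and the point with the smallest median distance to
    the others lies within O(r) of the optimum with probability
    1 - exp(-Omega(m)) -- confidence improves exponentially in m.
    """
    X = np.asarray(candidates, dtype=float)      # shape (m, d)
    # Pairwise Euclidean distances between all candidates.
    diff = X[:, None, :] - X[None, :, :]         # shape (m, m, d)
    D = np.linalg.norm(diff, axis=2)             # shape (m, m)
    # Score each candidate by its median distance to the others;
    # outlier runs get large scores and are never selected.
    scores = np.median(D, axis=1)
    return X[np.argmin(scores)]
```

For example, if five of seven runs return points near the optimum and two diverge, the selected point comes from the majority cluster, so the failure probability of the combined procedure is far smaller than that of any single run.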
Pages: 17