Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization

Cited by: 0
Authors
Ahmed Khaled
Othmane Sebbouh
Nicolas Loizou
Robert M. Gower
Peter Richtárik
Affiliations
[1] Princeton University
[2] ENS Paris
[3] CREST-ENSAE
[4] Johns Hopkins University
[5] Flatiron Institute
[6] KAUST
Keywords
Stochastic optimization; Convex optimization; Variance reduction; Composite optimization
DOI: Not available
Abstract
We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth, convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov et al. (in: AISTATS, 2020) and dropping the requirement that the loss function be strongly convex; instead, we rely only on convexity of the loss. Our unified analysis applies to a host of existing algorithms, including proximal SGD, variance-reduced methods, quantization-based methods, and some coordinate descent-type methods. For the variance-reduced methods, we recover the best known convergence rates as special cases. For proximal SGD and the quantization and coordinate descent-type methods, we uncover new state-of-the-art convergence rates. Our analysis also covers any form of sampling and minibatching. As such, we are able to determine the minibatch size that optimizes the total complexity of variance-reduced methods. We showcase this by obtaining a simple formula for the optimal minibatch size of two variance-reduced methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments.
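To make the setting concrete, below is a minimal sketch of two of the methods the abstract names, minibatch proximal SGD and a proximal variant of loopless SVRG (L-SVRG), applied to a toy composite problem (a least-squares loss plus an l1 regularizer, whose proximal operator is soft-thresholding). The function names, the step size, the minibatch size, and the reference-point probability p are illustrative choices of ours, not the optimal values derived in the paper.

```python
# Illustrative sketch (assumed setup, not the paper's exact algorithm or constants):
# composite objective F(x) = f(x) + R(x), where
#   f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2   (smooth, convex loss)
#   R(x) = lam * ||x||_1                            (convex regularizer)
import numpy as np


def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def batch_grad(A, b, x, idx):
    """Minibatch stochastic gradient of the least-squares loss f at x, over rows idx."""
    return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)


def prox_sgd(A, b, lam, step, batch_size, n_iters, seed=0):
    """Minibatch proximal SGD: x <- prox_{step*R}(x - step * g), g a minibatch gradient."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iters):
        idx = rng.choice(n, size=batch_size, replace=False)
        x = soft_threshold(x - step * batch_grad(A, b, x, idx), step * lam)
    return x


def prox_lsvrg(A, b, lam, step, batch_size, n_iters, p, seed=0):
    """Proximal L-SVRG: variance-reduced estimator g = grad_B f(x) - grad_B f(w) + grad f(w),
    where the reference point w is refreshed with probability p at each iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    w = x.copy()
    full_grad = A.T @ (A @ w - b) / n
    for _ in range(n_iters):
        idx = rng.choice(n, size=batch_size, replace=False)
        g = batch_grad(A, b, x, idx) - batch_grad(A, b, w, idx) + full_grad
        x = soft_threshold(x - step * g, step * lam)
        if rng.random() < p:  # occasionally refresh the reference point and its full gradient
            w = x.copy()
            full_grad = A.T @ (A @ w - b) / n
    return x


# Toy usage on synthetic sparse-regression data; all constants are placeholder values.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 30))
x_true = np.concatenate([rng.standard_normal(5), np.zeros(25)])
b = A @ x_true + 0.01 * rng.standard_normal(500)
x_sgd = prox_sgd(A, b, lam=0.1, step=1e-3, batch_size=16, n_iters=3000)
x_vr = prox_lsvrg(A, b, lam=0.1, step=1e-3, batch_size=16, n_iters=3000, p=16 / 500)
```

The two solvers differ only in the stochastic gradient estimator fed into the same proximal step, which is the kind of shared structure a unified analysis of this family of methods can exploit.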
Source: Journal of Optimization Theory and Applications, 2023, 199 (02)
Pages: 499 - 540 (41 pages)
Related Papers (50 in total)
  • [1] Simple Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization
    Metel, Michael R.
    Takeda, Akiko
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019
  • [2] Inexact proximal stochastic gradient method for convex composite optimization
    Wang, Xiao
    Wang, Shuxiong
    Zhang, Hongchao
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2017, 68 (03) : 579 - 618
  • [3] Optimal methods for convex nested stochastic composite optimization
    Zhang, Zhe
    Lan, Guanghui
    MATHEMATICAL PROGRAMMING, 2024
  • [4] Stochastic Methods for Composite and Weakly Convex Optimization Problems
    Duchi, John C.
    Ruan, Feng
    SIAM JOURNAL ON OPTIMIZATION, 2018, 28 (04) : 3229 - 3259
  • [5] Decentralized Gradient-Free Methods for Stochastic Non-smooth Non-convex Optimization
    Lin, Zhenwei
    Xia, Jingfan
    Deng, Qi
    Luo, Luo
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16, 2024: 17477 - 17486
  • [6] Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
    Kornilov, Nikita
    Gasnikov, Alexander
    Dvurechensky, Pavel
    Dvinskikh, Darina
    COMPUTATIONAL MANAGEMENT SCIENCE, 2023, 20 (01)
  • [7] Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
    Hanzely, Filip
    Richtarik, Peter
    Xiao, Lin
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2021, 79 (02) : 405 - 440