Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization

Cited by: 0
Authors
Ahmed Khaled
Othmane Sebbouh
Nicolas Loizou
Robert M. Gower
Peter Richtárik
Affiliations
Princeton University
ENS Paris
CREST-ENSAE
Johns Hopkins University
Flatiron Institute
KAUST
Keywords
Stochastic optimization; Convex optimization; Variance reduction; Composite optimization;
DOI: not available
Abstract
We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth and convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov et al. (in: AISTATS, 2020) and dropping the requirement that the loss function be strongly convex. Instead, we rely only on convexity of the loss function. Our unified analysis applies to a host of existing algorithms such as proximal SGD, variance reduced methods, quantization and some coordinate descent-type methods. For the variance reduced methods, we recover the best known convergence rates as special cases. For proximal SGD, the quantization and coordinate-type methods, we uncover new state-of-the-art convergence rates. Our analysis also includes any form of sampling or minibatching. As such, we are able to determine the minibatch size that optimizes the total complexity of variance reduced methods. We showcase this by obtaining a simple formula for the optimal minibatch size of two variance reduced methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments.
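To make the problem setting concrete, below is a minimal sketch of proximal SGD, one of the methods covered by the unified analysis, applied to minimizing a smooth convex loss plus a convex regularizer. The example problem (a lasso instance), the step size, the minibatch size, and the regularization weight are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of proximal SGD for min_x f(x) + R(x), where
# f(x) = (1/2n) * ||Ax - b||^2 is smooth and convex and R(x) = lam * ||x||_1
# is a convex regularizer. Problem data and hyperparameters are illustrative.

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:5] = 1.0
b = A @ x_true + 0.1 * rng.standard_normal(n)
lam = 0.1

def stoch_grad(x, idx):
    """Minibatch estimate of the gradient of the smooth part f."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

def prox_l1(x, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def prox_sgd(steps=2000, batch=8, lr=0.05):
    x = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        # Stochastic gradient step on f, followed by the prox step on R.
        x = prox_l1(x - lr * stoch_grad(x, idx), lr)
    return x

x_hat = prox_sgd()
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-2)[0])
```

The variance-reduced methods analyzed in the paper (e.g., L-SVRG and SAGA) replace the plain minibatch gradient above with a variance-reduced estimator while keeping the same prox step; the minibatch size `batch` is the quantity whose optimal value the paper characterizes.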
Pages: 499-540
Page count: 41
Related papers (50 in total)
  • [31] The Complexity of Making the Gradient Small in Stochastic Convex Optimization
    Foster, Dylan J.
    Sekhari, Ayush
    Shamir, Ohad
    Srebro, Nathan
    Sridharan, Karthik
    Woodworth, Blake
    CONFERENCE ON LEARNING THEORY, VOL 99, 2019, 99
  • [32] Stochastic intermediate gradient method for convex optimization problems
    Gasnikov, A. V.
    Dvurechensky, P. E.
    DOKLADY MATHEMATICS, 2016, 93 (02) : 148 - 151
  • [34] ON THE PRIVACY OF NOISY STOCHASTIC GRADIENT DESCENT FOR CONVEX OPTIMIZATION
    Altschuler, Jason M.
    Bok, Jinho
    Talwar, Kunal
    SIAM JOURNAL ON COMPUTING, 2024, 53 (04) : 969 - 1001
  • [35] BLOCK STOCHASTIC GRADIENT ITERATION FOR CONVEX AND NONCONVEX OPTIMIZATION
    Xu, Yangyang
    Yin, Wotao
    SIAM JOURNAL ON OPTIMIZATION, 2015, 25 (03) : 1686 - 1716
  • [36] Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
    Fort, Gersende
    Moulines, Eric
    STATISTICS AND COMPUTING, 2023, 33 (03)
  • [39] Private Adaptive Gradient Methods for Convex Optimization
    Asi, Hilal
    Duchi, John
    Fallah, Alireza
    Javidbakht, Omid
    Talwar, Kunal
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [40] Gradient Methods for Non-convex Optimization
    Jain, Prateek
    JOURNAL OF THE INDIAN INSTITUTE OF SCIENCE, 2019, 99 (02) : 247 - 256