Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient

Times Cited: 1
Author
Godichon-Baggioni, Antoine [1]
Affiliation
[1] Sorbonne Univ, Lab Probabil Stat & Modelisat, F-75005 Paris, France
Keywords
Stochastic optimization; stochastic gradient algorithm; averaging; online learning; non-asymptotic convergence; Hilbert spaces; descent
DOI
10.1080/02331888.2023.2213371
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
Online averaged stochastic gradient algorithms are increasingly studied because (i) they can quickly handle large samples taking values in high-dimensional spaces, (ii) they can process data sequentially, and (iii) they are known to be asymptotically efficient. In this paper, we focus on giving explicit bounds on the quadratic mean error of the estimates, without assuming that the function to be minimized is strongly convex or has a bounded gradient.
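The averaging the abstract refers to is, in the standard setting, Polyak-Ruppert averaging: run a plain stochastic gradient recursion with slowly decaying step sizes gamma_k = c / k^alpha, alpha in (1/2, 1), and estimate the minimizer by the running mean of the iterates rather than the last iterate. The sketch below illustrates that generic scheme only; the toy linear-regression objective, the constants c and alpha, and all function names are assumptions made here, not taken from the paper.

```python
import numpy as np

def averaged_sgd(grad, theta0, n_iter, c=1.0, alpha=0.66, seed=0):
    """Polyak-Ruppert averaged SGD (illustrative sketch).

    Runs theta_{k} = theta_{k-1} - gamma_k * grad(theta_{k-1}) with
    gamma_k = c / k**alpha, and returns the running average of the
    iterates theta_0, ..., theta_n.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    theta_bar = theta.copy()
    for k in range(1, n_iter + 1):
        g = grad(theta, rng)                 # unbiased noisy gradient estimate
        theta = theta - (c / k**alpha) * g   # plain SGD step
        theta_bar += (theta - theta_bar) / (k + 1)  # online mean of iterates
    return theta_bar

# Toy problem (assumed for illustration): linear regression,
# minimizing theta -> E[(x.theta - y)^2] / 2.
d = 5
theta_star = np.linspace(1.0, 2.0, d)

def lsq_grad(theta, rng):
    x = rng.standard_normal(d)
    y = x @ theta_star + 0.1 * rng.standard_normal()
    return (x @ theta - y) * x

estimate = averaged_sgd(lsq_grad, np.zeros(d), n_iter=100_000)
print(np.linalg.norm(estimate - theta_star))  # distance to the minimizer
```

Returning the average of the iterates instead of the last one is what yields the asymptotic efficiency mentioned in point (iii) of the abstract, and it is the quadratic mean error of this averaged estimate that the paper bounds non-asymptotically.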
Pages: 637 - 668
Page count: 32
Related Papers
(50 records in total)
  • [1] RETRACTION: Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient
    Godichon-Baggioni, Antoine
    STATISTICS, 2024, 58 (06) : 1531 - 1531
  • [2] On the rates of convergence of parallelized averaged stochastic gradient algorithms
    Godichon-Baggioni, Antoine
    Saadane, Sofiane
    STATISTICS, 2020, 54 (03) : 618 - 635
  • [3] Adaptivity of Averaged Stochastic Gradient Descent to Local Strong Convexity for Logistic Regression
    Bach, Francis
    JOURNAL OF MACHINE LEARNING RESEARCH, 2014, 15 : 595 - 627
  • [4] On Linear Convergence of Non-Euclidean Gradient Methods without Strong Convexity and Lipschitz Gradient Continuity
    Bauschke, Heinz H.
    Bolte, Jérôme
    Chen, Jiawei
    Teboulle, Marc
    Wang, Xianfu
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2019, 182 (03) : 1068 - 1087
  • [5] Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions
    Lei, Yunwen
    Hu, Ting
    Li, Guiying
    Tang, Ke
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (10) : 4394 - 4400
  • [6] Convergence analysis of gradient descent stochastic algorithms
    Shapiro, A
    Wardi, Y
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 1996, 91 (02) : 439 - 454
  • [7] New Convergence Aspects of Stochastic Gradient Algorithms
    Nguyen, Lam M.
    Nguyen, Phuong Ha
    Richtárik, Peter
    Scheinberg, Katya
    Takáč, Martin
    van Dijk, Marten
    JOURNAL OF MACHINE LEARNING RESEARCH, 2019, 20