Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient

Cited by: 1
Author
Godichon-Baggioni, Antoine [1]
Affiliation
[1] Sorbonne Univ, Lab Probabil Stat & Modelisat, F-75005 Paris, France
Keywords
Stochastic optimization; stochastic gradient algorithm; averaging; online learning; non-asymptotic convergence; Hilbert spaces; descent
DOI
10.1080/02331888.2023.2213371
CLC Classification: O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes: 020208; 070103; 0714
Abstract
Online averaged stochastic gradient algorithms have attracted growing attention since (i) they can quickly handle large samples taking values in high-dimensional spaces, (ii) they process data sequentially, and (iii) they are known to be asymptotically efficient. In this paper, we focus on giving explicit bounds on the quadratic mean error of the estimates, without assuming that the function to be minimized is strongly convex or admits a bounded gradient.
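As a concrete illustration of the class of algorithms described in the abstract (this sketch is not taken from the paper itself), the online averaging scheme is the classical Polyak-Ruppert average of Robbins-Monro iterates. A minimal Python sketch, assuming a logistic-regression loss (convex but not strongly convex) and a step size gamma_n = c * n^(-alpha) with alpha in (1/2, 1), could look as follows; the function name and parameters are illustrative only.

    import numpy as np

    def averaged_sgd(sample_stream, theta0, c_gamma=1.0, alpha=0.66):
        # Polyak-Ruppert averaged SGD (illustrative sketch, not the paper's code).
        # sample_stream yields (x, y) pairs; the gradient below is that of the
        # logistic loss, a convex objective that is not strongly convex.
        theta = np.asarray(theta0, dtype=float).copy()  # Robbins-Monro iterate theta_n
        theta_bar = theta.copy()                        # running average bar(theta)_n
        for n, (x, y) in enumerate(sample_stream, start=1):
            gamma_n = c_gamma * n ** (-alpha)                   # step size, alpha in (1/2, 1)
            grad = (1.0 / (1.0 + np.exp(-x @ theta)) - y) * x   # stochastic gradient of the logistic loss
            theta = theta - gamma_n * grad                      # Robbins-Monro descent step
            theta_bar += (theta - theta_bar) / n                # online Cesaro (Polyak-Ruppert) average
        return theta_bar

The averaged iterate theta_bar, rather than the last iterate theta, is the estimate whose quadratic mean error the paper bounds non-asymptotically.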
Pages: 637-668
Page count: 32