Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Cited: 0
Authors
Karimi, Belhal [1 ]
Miasojedow, Blazej [2 ]
Moulines, Eric [1 ]
Wai, Hoi-To [3 ]
Affiliations
[1] Ecole Polytechn, CMAP, Palaiseau, France
[2] Univ Warsaw, Fac Math Informat & Mech, Warsaw, Poland
[3] Chinese Univ Hong Kong, Dept SEEM, Hong Kong, Peoples R China
Source
Keywords
biased stochastic approximation; state-dependent Markov chain; non-convex optimization; policy gradient; online expectation-maximization; GRADIENT; OPTIMIZATION; CONVERGENCE; ALGORITHMS
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most prior analyses are made under restrictive assumptions, such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, thereby covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average-reward maximization in reinforcement learning.
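As a concrete illustration of the scheme described above, the following is a minimal sketch (not taken from the paper) of a biased SA recursion theta_{n+1} = theta_n - gamma_{n+1} H(theta_n, X_{n+1}), where X_{n+1} is drawn from a state-dependent Markov kernel; the quadratic objective, the AR(1)-type kernel, and the step-size schedule are hypothetical placeholders chosen only to make the example runnable.

import numpy as np

def markov_step(x, theta, rng):
    # Hypothetical state-dependent kernel: an AR(1)-type chain whose
    # stationary mean (for a frozen theta) equals theta, so the samples
    # are Markovian and each one-step update is biased.
    return 0.5 * x + 0.5 * theta + 0.1 * rng.standard_normal(theta.shape)

def drift(theta, x):
    # Drift term H(theta, x) = x; under the chain's stationary law its
    # mean field is h(theta) = theta, the gradient of ||theta||^2 / 2.
    return x

rng = np.random.default_rng(0)
theta, x = np.ones(3), np.zeros(3)
for n in range(1, 10001):
    gamma = 1.0 / n ** 0.6                   # diminishing step size
    x = markov_step(x, theta, rng)           # state-dependent Markov sample
    theta = theta - gamma * drift(theta, x)  # biased SA update
print(theta)                                 # approaches the minimizer 0

With Markovian samples, the conditional expectation of drift(theta, x) given the past differs from the mean field h(theta); this is the kind of one-step bias the analysis in the paper accommodates.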
Pages: 31