Almost Sure Convergence and Non-Asymptotic Concentration Bounds for Stochastic Mirror Descent Algorithm

Cited by: 0
Authors
Paul, Anik Kumar [1 ]
Mahindrakar, Arun D. [1 ]
Kalaimani, Rachel K. [1 ]
Affiliations
[1] Indian Institute of Technology Madras, Department of Electrical Engineering, Chennai 600036, India
Keywords
Mirror descent algorithm; almost sure convergence; concentration inequality; sub-Gaussian random vectors; optimization
DOI
10.1109/LCSYS.2024.3482148
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
This letter investigates the convergence and concentration properties of the Stochastic Mirror Descent (SMD) algorithm with biased stochastic subgradients. We establish almost sure convergence of the algorithm's iterates under the assumption of diminishing bias. We then derive, under standard assumptions, concentration bounds for the gap between the iterates' function values and the optimal value. Finally, assuming sub-Gaussian noise in the stochastic subgradients, we present refined concentration bounds for this gap.
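For context, the generic SMD update replaces the Euclidean proximal step of stochastic gradient descent with a Bregman proximal step induced by a mirror map. The sketch below is illustrative only and does not reproduce the paper's biased-subgradient analysis: it runs SMD on the probability simplex with the negative-entropy mirror map, for which the Bregman step reduces to the well-known exponentiated-gradient update. The cost vector, step-size schedule, and noise model are assumptions chosen for the toy example.

```python
import numpy as np

def smd_simplex(grad_oracle, x0, steps, eta0):
    """Stochastic mirror descent on the probability simplex with the
    negative-entropy mirror map (exponentiated-gradient update)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g = grad_oracle(x)          # stochastic subgradient (possibly noisy)
        eta = eta0 / np.sqrt(k)     # diminishing step size
        x = x * np.exp(-eta * g)    # mirror step in the dual space
        x /= x.sum()                # Bregman "projection" back onto the simplex
    return x

# Toy problem: minimize f(x) = <c, x> over the simplex; the optimum puts
# all mass on the coordinate with the smallest cost (index 1 here).
rng = np.random.default_rng(0)
c = np.array([1.0, 0.5, 2.0])
oracle = lambda x: c + 0.1 * rng.standard_normal(3)   # noisy subgradient
x_star = smd_simplex(oracle, np.ones(3) / 3, 2000, 0.5)
```

With the diminishing step size above, the iterate concentrates its mass on the lowest-cost coordinate, matching the almost-sure-convergence behavior the letter studies in a far more general setting.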
Pages: 2397 - 2402
Page count: 6
Related Papers (50 total)
  • [1] Almost sure convergence rates of stochastic proximal gradient descent algorithm
    Liang, Yuqing
    Xu, Dongpo
    OPTIMIZATION, 2024, 73 (08) : 2413 - 2446
  • [2] An Almost Sure Convergence Analysis of Zeroth-Order Mirror Descent Algorithm
    Paul, Anik Kumar
    Mahindrakar, Arun D.
    Kalaimani, Rachel K.
    2023 AMERICAN CONTROL CONFERENCE, ACC, 2023, : 855 - 860
  • [3] Almost sure convergence of stochastic composite objective mirror descent for non-convex non-smooth optimization
    Liang, Yuqing
    Xu, Dongpo
    Zhang, Naimin
    Mandic, Danilo P.
    OPTIMIZATION LETTERS, 2024, 18 (09) : 2113 - 2131
  • [4] Robust Analysis of Almost Sure Convergence of Zeroth-Order Mirror Descent Algorithm
    Paul, Anik Kumar
    Mahindrakar, Arun D.
    Kalaimani, Rachel K.
    IEEE CONTROL SYSTEMS LETTERS, 2023, 7 : 1933 - 1938
  • [5] On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems
    Mertikopoulos, Panayotis
    Hallak, Nadav
    Kavis, Ali
    Cevher, Volkan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [6] Almost sure convergence of randomised-difference descent algorithm for stochastic convex optimisation
    Geng, Xiaoxue
    Huang, Gao
    Zhao, Wenxiao
    IET CONTROL THEORY AND APPLICATIONS, 2021, 15 (17): 2183 - 2194
  • [7] THE PERTURBED PROX-PRECONDITIONED SPIDER ALGORITHM: NON-ASYMPTOTIC CONVERGENCE BOUNDS
    Fort, G.
    Moulines, E.
    2021 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP (SSP), 2021, : 96 - 100
  • [8] Non-Asymptotic Guarantees for Sampling by Stochastic Gradient Descent
    Karagulyan, A. G.
    JOURNAL OF CONTEMPORARY MATHEMATICAL ANALYSIS-ARMENIAN ACADEMY OF SCIENCES, 2019, 54 (02): 71 - 78
  • [10] Non-asymptotic convergence bounds for modified tamed unadjusted Langevin algorithm in non-convex setting
    Neufeld, Ariel
    Ng, Matthew
    Zhang, Ying
    JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS, 2025, 543 (01)