SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization

Cited by: 0
Authors
Wai, Hoi-To [1 ]
Freris, Nikolaos M. [2 ,3 ]
Nedic, Angelia [1 ]
Scaglione, Anna [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
[2] New York Univ Abu Dhabi, Div Engn, Abu Dhabi, U Arab Emirates
[3] NYU, Tandon Sch Engn, Brooklyn, NY USA
Keywords
Distributed optimization; Incremental methods; Asynchronous algorithms; Randomized algorithms; Multi-agent systems; Machine learning; SUBGRADIENT METHODS; CLOCKS;
DOI: Not available
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite-sum optimization problems. SUCAG constitutes an unbiased total gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under the general asynchronous model of computation, in which each function is selected infinitely often with possibly unbounded (but sublinear) delay. For strongly convex problems, we establish linear convergence for the SUCAG method. When the initialization point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach for implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on an undirected communication graph. We show that our analysis applies as long as the graph is connected and, notably, establishes an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merits of our algorithm over existing methods.
Pages: 1751-1756
Page count: 6
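The abstract above describes SUCAG as a curvature-aided gradient tracking scheme for finite-sum problems, in which Hessian information is used to correct stale component gradients. Below is a minimal illustrative sketch of that general idea on a synthetic least-squares objective with a fixed step size and uniform random component selection; these modeling choices are assumptions for illustration and do not reproduce the exact SUCAG update, its unbiasedness correction, or the asynchronous/Markov-driven implementation analyzed in the paper.

```python
# Illustrative sketch of curvature-aided aggregated gradient tracking on a
# least-squares finite sum f(x) = (1/m) * sum_i 0.5 * (a_i^T x - b_i)^2.
# NOT the exact SUCAG method: it only shows how stored Hessians can
# extrapolate stale component gradients to the current iterate.
import numpy as np

rng = np.random.default_rng(0)
m, d = 50, 10
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

def grad_i(i, x):            # gradient of the i-th component
    return (A[i] @ x - b[i]) * A[i]

def hess_i(i):               # Hessian of the i-th component (constant here)
    return np.outer(A[i], A[i])

x = np.zeros(d)
mem_x = np.zeros((m, d))     # iterate at which component i was last visited
mem_g = np.array([grad_i(i, mem_x[i]) for i in range(m)])
mem_H = np.array([hess_i(i) for i in range(m)])
step = 0.05                  # assumed fixed step size, not from the paper

for k in range(2000):
    # curvature-aided surrogate of the full gradient: each stale gradient is
    # extrapolated to the current iterate via its stored Hessian
    g = np.mean(mem_g + np.einsum('ijk,ik->ij', mem_H, x - mem_x), axis=0)
    x = x - step * g
    i = rng.integers(m)      # refresh one randomly chosen component's memory
    mem_x[i], mem_g[i], mem_H[i] = x, grad_i(i, x), hess_i(i)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("distance to least-squares solution:", np.linalg.norm(x - x_star))
```

Because the components in this sketch are quadratic, the Hessian correction reconstructs each component gradient exactly at the current iterate; for general strongly convex functions the correction is only approximate, and the convergence rate and step-size conditions are those established in the paper.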