Decentralized Asynchronous Nonconvex Stochastic Optimization on Directed Graphs

Citations: 3
Authors
Kungurtsev, Vyacheslav [1 ]
Morafah, Mahdi [2 ]
Javidi, Tara [2 ]
Scutari, Gesualdo [3 ]
Affiliations
[1] Czech Tech Univ, Dept Comp Sci, Prague, Czech Republic
[2] Univ Calif San Diego, Dept Elect Engn, La Jolla, CA 92093 USA
[3] Purdue Univ, Sch Ind Engn, W Lafayette, IN 47907 USA
Keywords
Optimization; Stochastic processes; Convergence; Delays; Directed graphs; Noise measurement; Linear programming; Decentralized applications; distributed computing; federated learning; machine learning; optimization methods; consensus
DOI
10.1109/TCNS.2023.3242043
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
In this article, we consider a decentralized stochastic optimization problem over a network of agents modeled as a directed graph: the agents aim to asynchronously minimize the average of their individual (possibly nonconvex) losses, with each agent having access only to a noisy estimate of the gradient of its own function. We propose an asynchronous distributed algorithm for this class of problems. The algorithm combines stochastic gradients with gradient tracking in an asynchronous push-sum framework and achieves a sublinear convergence rate, matching the rate of centralized stochastic gradient descent applied to nonconvex minimization. Our experiments on a nonconvex image classification task using a convolutional neural network validate the convergence of the proposed algorithm across different numbers of nodes and graph connectivity levels.
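The abstract describes a method that combines stochastic gradients, gradient tracking, and push-sum averaging over a directed graph. As a rough illustration only, the following is a minimal *synchronous* sketch of stochastic gradient tracking with push-sum de-biasing (in the spirit of Push-DIGing-style methods), not the authors' asynchronous algorithm: the quadratic losses, directed-ring topology, step size, noise level, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                        # number of agents
b = rng.normal(size=n)       # agent i holds the local loss f_i(z) = 0.5 * (z - b[i])**2
opt = b.mean()               # minimizer of the average loss (1/n) * sum_i f_i

# Directed ring with self-loops: agent j sends to itself and to (j+1) % n.
# A is column-stochastic: column j splits agent j's mass over its out-neighbors.
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5

def noisy_grad(z, i, sigma=0.1):
    """Stochastic gradient of f_i at z: true gradient plus Gaussian noise."""
    return (z - b[i]) + sigma * rng.normal()

alpha = 0.05                 # step size
x = np.zeros(n)              # push-sum numerators
y = np.ones(n)               # push-sum weights (used to de-bias the numerators)
z = x / y                    # de-biased local iterates
grads = np.array([noisy_grad(z[i], i) for i in range(n)])
g = grads.copy()             # gradient trackers, initialized at the local gradients

for k in range(2000):
    x = A @ (x - alpha * g)  # push-sum mixing of the gradient-perturbed numerators
    y = A @ y                # push-sum mixing of the weights
    z = x / y                # de-biased iterates
    grads_new = np.array([noisy_grad(z[i], i) for i in range(n)])
    g = A @ g + grads_new - grads  # gradient-tracking recursion
    grads = grads_new

print(z)  # each agent's iterate should be close to opt = b.mean()
```

With a column-stochastic matrix alone the iterates would converge to a weighted (biased) average; the push-sum weights `y` correct that bias, while the tracker `g` lets each agent estimate the gradient of the *global* average loss from purely local exchanges.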
Pages: 1796-1804 (9 pages)