A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks

Cited by: 43
Authors
Lu, Qingguo [1 ]
Liao, Xiaofeng [2 ]
Li, Huaqing [1 ]
Huang, Tingwen [3 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing Key Lab Nonlinear Circuits & Intelligen, Chongqing 400715, Peoples R China
[2] Chongqing Univ, Coll Comp, Chongqing 400044, Peoples R China
[3] Texas A&M Univ Qatar, Sci Program, Doha, Qatar
Funding
National Natural Science Foundation of China;
Keywords
Convergence; Cost function; Convex functions; Acceleration; Delays; Information processing; Directed network; distributed convex optimization; gradient tracking; linear convergence; Nesterov-like algorithm; LINEAR MULTIAGENT SYSTEMS; CONVERGENCE; CONSENSUS; GRAPHS; ADMM;
DOI
10.1109/TSMC.2019.2960770
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this article, we address the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the goal is to minimize a global cost function (the average of all local cost functions) while respecting the network connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the imbalance induced by the directed network by employing column-stochastic weights, which may be infeasible because the distributed implementation then requires each unit to know (at least) its out-degree. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, that incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of outstanding consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward if each unit locally chooses a suitable step-size and privately regulates the weights on the information acquired from its in-neighbors. If the largest step-size and the maximum momentum coefficient are positive and sufficiently small, we prove that D-DNGT converges linearly to the optimal solution provided that the cost functions are smooth and strongly convex. We provide numerical experiments to confirm these findings and compare D-DNGT with recently proposed distributed optimization approaches.
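The ingredients the abstract describes (gradient tracking, a Nesterov-style momentum term, and row-stochastic weights that only require knowledge of in-neighbors) can be illustrated with a small simulation. The sketch below is not the paper's exact D-DNGT recursion; it combines a FROST-style row-stochastic gradient tracker (with a Perron-eigenvector correction) and a Nesterov-like extrapolation step. The network, the weight matrix `A`, the local costs `f_i(x) = 0.5*(x - b_i)^2`, and the parameters `alpha`, `beta` are all illustrative assumptions.

```python
import numpy as np

# Three agents on a strongly connected directed ring with self-loops.
# Row i of A weights agent i's in-neighbors (row-stochastic: rows sum to 1),
# so each agent only needs to know who it receives information from.
n = 3
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
b = np.array([1.0, 2.0, 6.0])   # local data; optimum of the average cost is mean(b) = 3

def grad(x):
    # Stacked local gradients of f_i(x_i) = 0.5*(x_i - b_i)^2.
    return x - b

alpha, beta = 0.1, 0.1          # step-size and momentum coefficient (small, as the theory requires)
x = np.zeros(n)                  # local estimates x_i
x_prev = x.copy()
V = np.eye(n)                    # eigenvector estimates: V = A^k -> 1 * pi^T as k grows
d_old = np.diag(V).copy()        # diagonal [V]_{ii}, used to rescale local gradients
z = grad(x) / d_old              # gradient trackers (initialized to rescaled local gradients)

for _ in range(3000):
    V = A @ V                                  # update eigenvector estimates
    d = np.diag(V).copy()
    s = x + beta * (x - x_prev)                # Nesterov-like extrapolation
    x_new = A @ s - alpha * z                  # consensus step + tracked-gradient descent
    z = A @ z + grad(x_new) / d - grad(x) / d_old   # gradient tracking with eigenvector correction
    x_prev, x, d_old = x, x_new, d

print(x)  # all entries close to mean(b) = 3.0
```

The eigenvector correction (dividing by `[V]_{ii}`) compensates for the fact that a row-stochastic matrix does not preserve sums, which is exactly the obstacle column-stochastic (push-sum) methods avoid at the price of requiring out-degree knowledge.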
Pages: 6258 - 6270
Page count: 13
Related Papers
50 records in total
  • [31] A stochastic gradient tracking algorithm with adaptive momentum for distributed optimization
    Li, Yantao
    Hu, Hanqing
    Zhang, Keke
    Lu, Qingguo
    Deng, Shaojiang
    Li, Huaqing
    NEUROCOMPUTING, 2025, 637
  • [32] Event-triggered zero-gradient-sum distributed consensus optimization over directed networks
    Chen, Weisheng
    Ren, Wei
    AUTOMATICA, 2016, 65 : 90 - 97
  • [33] Event-Triggered Distributed Optimization Algorithm over Directed Networks: A Nonsingular Estimator Approach
    Xian, Chengxin
    Tao, Qianle
    Liu, Yongfang
    Wang, Huimin
    Zhao, Yu
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 3884 - 3889
  • [34] A Continuous-Time Gradient-Tracking Algorithm for Directed Networks
    Dhullipalla, Mani H.
    Chen, Tongwen
    IEEE CONTROL SYSTEMS LETTERS, 2024, 8 : 2199 - 2204
  • [35] Gradient-Based Distributed Controller Design Over Directed Networks
    Watanabe, Yuto
    Sakurama, Kazunori
    Ahn, Hyo-Sung
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2024, 11 (04): : 1998 - 2009
  • [36] Distributed Stochastic Algorithm for Convex Optimization Over Directed Graphs
    Cheng, Songsong
    Liang, Shu
    Hong, Yiguang
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 101 - 106
  • [37] Convergence of Distributed Accelerated Algorithm Over Unbalanced Directed Networks
    Li, Huaqing
    Lu, Qingguo
    Chen, Guo
    Huang, Tingwen
    Dong, Zhaoyang
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2021, 51 (08): : 5153 - 5164
  • [38] Distributed stochastic compositional optimization problems over directed networks
    Zhao, Shengchao
    Liu, Yongchao
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2024, 87 : 249 - 288
  • [39] Distributed stochastic compositional optimization problems over directed networks
    Zhao, Shengchao
    Liu, Yongchao
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2024, 87 (01) : 249 - 288
  • [40] S-DIGing: A Stochastic Gradient Tracking Algorithm for Distributed Optimization
    Li, Huaqing
    Zheng, Lifeng
    Wang, Zheng
    Yan, Yu
    Feng, Liping
    Guo, Jing
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2022, 6 (01): : 53 - 65