A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks

Cited: 43
Authors
Lu, Qingguo [1 ]
Liao, Xiaofeng [2 ]
Li, Huaqing [1 ]
Huang, Tingwen [3 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing Key Lab Nonlinear Circuits & Intelligen, Chongqing 400715, Peoples R China
[2] Chongqing Univ, Coll Comp, Chongqing 400044, Peoples R China
[3] Texas A&M Univ Qatar, Sci Program, Doha, Qatar
Funding
National Natural Science Foundation of China;
Keywords
Convergence; Cost function; Convex functions; Acceleration; Delays; Information processing; Directed network; distributed convex optimization; gradient tracking; linear convergence; Nesterov-like algorithm; LINEAR MULTIAGENT SYSTEMS; CONVERGENCE; CONSENSUS; GRAPHS; ADMM;
DOI
10.1109/TSMC.2019.2960770
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline code
0812 ;
Abstract
In this article, we address the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the goal is to minimize a global cost function (the average of all local cost functions) while respecting the network's connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the imbalance induced by the directed network by using column-stochastic weights, which may be infeasible because the distributed implementation then requires each unit to know (at least) its out-degree. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, that incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of well-known consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward: each unit locally chooses a suitable step-size and privately assigns the weights on the information it acquires from its in-neighbors. If the largest step-size and the maximum momentum coefficient are positive and sufficiently small, we prove that D-DNGT converges linearly to the optimal solution provided that the cost functions are smooth and strongly convex. We provide numerical experiments that confirm these findings and compare D-DNGT with recently proposed distributed optimization approaches.
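The abstract describes the main ingredients of the method: row-stochastic mixing weights (so each unit only needs to weight what it receives from in-neighbors), a gradient-tracking variable, a Perron-eigenvector estimate to undo the bias that row-stochastic weights introduce, and a Nesterov-style momentum term. The sketch below illustrates this class of iteration on scalar quadratic costs. It is a hypothetical simplification, not the exact D-DNGT recursion from the paper; the network, weights, step-sizes, and momentum coefficient are all illustrative choices.

```python
import numpy as np

# Illustrative sketch of a row-stochastic gradient-tracking iteration with
# a Nesterov-style momentum term (NOT the exact D-DNGT updates; all
# parameters here are assumptions for demonstration).

n = 4                                  # number of agents
q = np.array([1.0, 2.0, 3.0, 4.0])     # local curvatures: f_i(x) = q_i * (x - b_i)**2 / 2
b = np.array([1.0, 2.0, 3.0, 4.0])     # local minimizers
x_star = np.dot(q, b) / q.sum()        # minimizer of the average cost

def grad(i, x):
    """Gradient of the i-th local cost at x."""
    return q[i] * (x - b[i])

# Row-stochastic weights on a directed ring: agent i listens to itself and
# to its in-neighbor (i + 1) % n. Rows sum to 1; columns do not, so no
# out-degree knowledge is needed.
c = np.array([0.3, 0.5, 0.7, 0.6])
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = c[i]
    A[i, (i + 1) % n] = 1.0 - c[i]

alpha = np.full(n, 0.005)              # step-sizes (nonuniform values also allowed)
beta = 0.05                            # small momentum coefficient

x = np.zeros(n)                        # local estimates
x_prev = x.copy()
Z = np.eye(n)                          # eigenvector estimates; row i held by agent i
y = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers (Z[i, i] = 1 at k = 0)

for _ in range(20000):
    s = x + beta * (x - x_prev)        # Nesterov-like extrapolation step
    x_prev = x
    g_old = np.array([grad(i, x[i]) / Z[i, i] for i in range(n)])
    x = A @ s - alpha * y              # mix momentum iterates, descend along tracker
    Z = A @ Z                          # Z[i, i] converges to the i-th Perron entry
    g_new = np.array([grad(i, x[i]) / Z[i, i] for i in range(n)])
    y = A @ y + g_new - g_old          # track the rescaled average gradient

print(x, x_star)                       # all local estimates approach x_star
```

Dividing each local gradient by `Z[i, i]` is what corrects for the nonuniform Perron weighting of the row-stochastic matrix, so the iterates agree on the minimizer of the unweighted average cost rather than a weighted one.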
Pages: 6258-6270
Number of pages: 13