A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks

Cited: 43
Authors
Lu, Qingguo [1 ]
Liao, Xiaofeng [2 ]
Li, Huaqing [1 ]
Huang, Tingwen [3 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing Key Lab Nonlinear Circuits & Intelligen, Chongqing 400715, Peoples R China
[2] Chongqing Univ, Coll Comp, Chongqing 400044, Peoples R China
[3] Texas A&M Univ Qatar, Sci Program, Doha, Qatar
Funding
National Natural Science Foundation of China;
Keywords
Convergence; Cost function; Convex functions; Acceleration; Delays; Information processing; Directed network; distributed convex optimization; gradient tracking; linear convergence; Nesterov-like algorithm; LINEAR MULTIAGENT SYSTEMS; CONVERGENCE; CONSENSUS; GRAPHS; ADMM;
DOI
10.1109/TSMC.2019.2960770
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In this article, we concentrate on the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the principal target is to minimize the global cost function (formulated as the average of all local cost functions) while obeying the network connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the unbalancedness induced by the directed network by utilizing column-stochastic weights, which may be infeasible because the distributed implementation then requires each unit to gain access to (at least) its out-degree information. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, that incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of outstanding consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward if each unit locally chooses a suitable step-size and privately regulates the weights on the information that it acquires from its in-neighbors. If the largest step-size and the maximum momentum coefficient are positive and sufficiently small, we prove that D-DNGT converges linearly to the optimal solution provided that the cost functions are smooth and strongly convex. We provide numerical experiments to confirm the findings in this article and contrast D-DNGT with recently proposed distributed optimization approaches.
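The abstract does not state the D-DNGT update equations, so the Python snippet below is only a minimal sketch of the ingredients it mentions: gradient tracking over a directed network with row-stochastic weights, a Nesterov-like momentum (extrapolation) term, and nonuniform step-sizes. The network, local quadratic costs, step-sizes, and momentum coefficient are hypothetical choices, and the eigenvector-estimation rescaling is the standard device used by row-stochastic gradient-tracking schemes rather than the paper's exact construction.

```python
# A minimal, illustrative sketch (NOT the paper's exact D-DNGT updates):
# row-stochastic gradient tracking with a small Nesterov-like momentum term
# and nonuniform step-sizes over a strongly connected directed ring.
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3                                   # agents, variable dimension

# Local strongly convex quadratics f_i(x) = 0.5 x'H_i x - b_i'x (hypothetical data)
H = []
for _ in range(n):
    R = 0.3 * rng.standard_normal((p, p))
    H.append(np.eye(p) + R.T @ R)             # H_i >= I, so each f_i is strongly convex
b = [rng.standard_normal(p) for _ in range(n)]
grad = lambda i, x: H[i] @ x - b[i]

# Row-stochastic (not doubly stochastic) weights: agent i only assigns weights
# to itself and its in-neighbor, and never needs its out-degree.
A = np.zeros((n, n))
for i in range(n):
    w = 0.3 + 0.4 * rng.random()              # per-agent self-weight
    A[i, i] = w
    A[i, (i - 1) % n] = 1.0 - w               # single in-neighbor on a directed ring

alpha = 0.05 * (1.0 + 0.2 * rng.random(n))    # small, nonuniform step-sizes
beta = 0.1                                    # small momentum coefficient

x = np.zeros((n, p))                          # local estimates
x_prev = x.copy()
v = np.eye(n)                                 # eigenvector estimates (v_i^0 = e_i)
z = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for _ in range(2000):
    s = x + beta * (x - x_prev)               # Nesterov-like extrapolation
    x_prev = x.copy()
    x = A @ s - alpha[:, None] * z            # mix with in-neighbors, step along tracker
    v_new = A @ v                             # update Perron-eigenvector estimates
    # Track the network-average gradient, rescaled to undo the row-stochastic bias
    z = A @ z + np.array([grad(i, x[i]) / v_new[i, i] - grad(i, x_prev[i]) / v[i, i]
                          for i in range(n)])
    v = v_new

# All local estimates should (approximately) agree on the minimizer of the average cost
x_star = np.linalg.solve(sum(H), sum(b))
print("max agent error:", np.max(np.linalg.norm(x - x_star, axis=1)))
```

In this sketch each agent extrapolates with its previous iterate, mixes with in-neighbors, and then steps along its tracker z, which estimates the network-average gradient; with the small step-sizes and momentum coefficient chosen above, the local estimates agree and approach the minimizer of the average cost, consistent with the linear-convergence regime described in the abstract.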
Pages: 6258 - 6270
Number of pages: 13
Related Papers
50 records in total
  • [1] A Distributed Nesterov-Like Gradient Tracking Algorithm for Composite Constrained Optimization
    Zheng, Lifeng
    Li, Huaqing
    Li, Jun
    Wang, Zheng
    Lu, Qingguo
    Shi, Yawei
    Wang, Huiwei
    Dong, Tao
    Ji, Lianghao
    Xia, Dawen
    IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2023, 9 : 60 - 73
  • [2] An Improved Distributed Nesterov Gradient Tracking Algorithm for Smooth Convex Optimization Over Directed Networks
    Lin, Yifu
    Li, Wenling
    Zhang, Bin
    Du, Junping
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2025, 70 (04) : 2738 - 2745
  • [3] Distributed Nesterov-like Gradient Algorithms
    Jakovetic, Dusan
    Moura, Jose M. F.
    Xavier, Joao
    2012 IEEE 51ST ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2012, : 5459 - 5464
  • [4] Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks
    Jakovetic, Dusan
    Freitas Xavier, Joao Manuel
    Moura, Jose M. F.
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2014, 62 (04) : 868 - 882
  • [5] A Robust Gradient Tracking Method for Distributed Optimization over Directed Networks
    Pu, Shi
    2020 59TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2020, : 2335 - 2341
  • [6] A decentralized Nesterov gradient method for stochastic optimization over unbalanced directed networks
    Hu, Jinhui
    Xia, Dawen
    Cheng, Huqiang
    Feng, Liping
    Ji, Lianghao
    Guo, Jing
    Li, Huaqing
    ASIAN JOURNAL OF CONTROL, 2022, 24 (02) : 576 - 593
  • [7] Distributed Stochastic Optimization with Gradient Tracking over Time-Varying Directed Networks
Nguyen, Duong Thuy Anh
    Nguyen, Duong Tung
    Nedic, Angelia
    FIFTY-SEVENTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, IEEECONF, 2023, : 1605 - 1609
  • [8] Barzilai-Borwein gradient tracking method for distributed optimization over directed networks
Gao, J.
    Liu, X.-E.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2023, 40 (09) : 1637 - 1645
  • [9] Accelerated Nesterov Design for Distributed Optimization Over Directed Graphs
Zhang, Zhoubin
    Yin, Xiaoqi
    Fan, Yuan
    Cheng, Songsong
    INTERNATIONAL JOURNAL OF CONTROL, AUTOMATION AND SYSTEMS, 2025, 23 (4) : 1058 - 1068
  • [10] A Fast Algorithm for Distributed Optimization over Directed Networks
    Zeng, Jinshan
    He, Tao
    2016 IEEE INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (CYBER), 2016, : 45 - 49