A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks

Cited by: 43
Authors
Lu, Qingguo [1 ]
Liao, Xiaofeng [2 ]
Li, Huaqing [1 ]
Huang, Tingwen [3 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing Key Lab Nonlinear Circuits & Intelligent Informat Proc, Chongqing 400715, Peoples R China
[2] Chongqing Univ, Coll Comp, Chongqing 400044, Peoples R China
[3] Texas A&M Univ Qatar, Sci Program, Doha, Qatar
Funding
National Natural Science Foundation of China
Keywords
Convergence; cost function; convex functions; acceleration; delays; information processing; directed network; distributed convex optimization; gradient tracking; linear convergence; Nesterov-like algorithm; linear multiagent systems; consensus; graphs; ADMM
DOI
10.1109/TSMC.2019.2960770
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this article, we address the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the goal is to minimize a global cost function (the average of all local cost functions) while obeying the network connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the imbalance induced by the directed network by employing column-stochastic weights, which may be infeasible because the distributed implementation then requires each unit to know (at least) its own out-degree. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, which incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of notable consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward: each unit locally chooses a suitable step-size and privately regulates the weights on the information acquired from its in-neighbors. Provided that the cost functions are smooth and strongly convex, we prove that D-DNGT converges linearly to the optimal solution if the largest step-size and the maximum momentum coefficient are positive and sufficiently small. We provide numerical experiments to confirm the findings in this article and to compare D-DNGT with recently proposed distributed optimization approaches.
Pages: 6258-6270 (13 pages)
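
The abstract describes D-DNGT only at a high level (row-stochastic mixing, gradient tracking, Nesterov-style momentum, nonuniform step-sizes). As a rough illustration of this class of methods, below is a minimal NumPy sketch of a generic row-stochastic gradient-tracking iteration with a momentum extrapolation. The update form, the eigenvector-estimation correction, the toy least-squares problem, and all parameter choices are illustrative assumptions, not the paper's exact D-DNGT recursion.

    # A minimal, illustrative sketch of row-stochastic gradient tracking with a
    # Nesterov-style momentum term, in the spirit of D-DNGT. The exact D-DNGT
    # updates, step-size bounds, and momentum conditions are given in the paper;
    # this generic recursion and toy problem are stand-in assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 3                        # 5 agents, 3-dimensional decision variable

    # Local smooth costs f_i(x) = 0.5 * ||H_i x - b_i||^2 (hypothetical test problem)
    H = rng.normal(size=(n, d, d)) + 2 * np.eye(d)
    b = rng.normal(size=(n, d))
    grad = lambda i, x: H[i].T @ (H[i] @ x - b[i])

    # Directed ring with self-loops; row-stochastic weights only require each
    # unit to weigh information from its in-neighbors, not to know out-degrees.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 0.5
        A[i, (i - 1) % n] = 0.5        # in-neighbor: agent i-1

    alpha = 0.005 * (1 + 0.1 * rng.random(n))   # nonuniform step-sizes (kept small)
    beta = 0.1                                   # momentum coefficient (kept small)

    x = rng.normal(size=(n, d)); x_prev = x.copy()
    y = np.eye(n)                                # Perron-eigenvector estimation, Y_0 = I
    z = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers

    for k in range(10000):
        s = x + beta * (x - x_prev)              # Nesterov-style extrapolation
        x_prev = x
        x = A @ s - alpha[:, None] * z           # consensus mixing + tracked-gradient step
        y_new = A @ y                            # Y_k = A^k; diagonal corrects row-stochastic bias
        g_new = np.array([grad(i, x[i]) / y_new[i, i] for i in range(n)])
        g_old = np.array([grad(i, x_prev[i]) / y[i, i] for i in range(n)])
        z = A @ z + g_new - g_old                # dynamic average gradient tracking
        y = y_new

    # All agents should (approximately) agree on the minimizer of the average cost.
    x_star = np.linalg.solve(sum(H[i].T @ H[i] for i in range(n)),
                             sum(H[i].T @ b[i] for i in range(n)))
    print(np.max(np.abs(x - x_star)))

The diagonal entries of the iterated weight matrix estimate the left Perron eigenvector of A; dividing the local gradients by them is a standard device for removing the bias introduced by weights that are only row-stochastic, and it needs no out-degree information.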