Multi-Consensus Decentralized Accelerated Gradient Descent

Cited: 0
Authors
Ye, Haishan [1 ]
Luo, Luo [2 ]
Zhou, Ziang [3 ]
Zhang, Tong [4 ]
Affiliations
[1] Xi An Jiao Tong Univ, Ctr Intelligent Decis Making & Machine Learning, Sch Management, Xian, Peoples R China
[2] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[4] Hong Kong Univ Sci & Technol, Comp Sci & Math, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
consensus optimization; decentralized algorithm; accelerated gradient descent; gradient tracking; composite optimization; DISTRIBUTED OPTIMIZATION; LINEAR CONVERGENCE; ALGORITHMS; COMMUNICATION; ITERATIONS; EXTRA;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper considers the decentralized convex optimization problem, which has a wide range of applications in large-scale machine learning, sensor networks, and control theory. We propose novel algorithms that achieve optimal computation complexity and near-optimal communication complexity. Our theoretical results give an affirmative answer to the open problem of whether there exists an algorithm whose communication complexity (nearly) matches the lower bound that depends on the global condition number rather than the local one. Furthermore, the linear convergence of our algorithms depends only on the strong convexity of the global objective and does not require the local functions to be convex. The design of our methods relies on a novel integration of well-known techniques, including Nesterov's acceleration, multi-consensus, and gradient tracking. Empirical studies demonstrate that our methods outperform existing approaches in machine learning applications.
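The abstract names three building blocks: Nesterov-style extrapolation, multi-consensus (several mixing rounds per outer iteration), and gradient tracking. The listing below is a minimal sketch of how these pieces fit together on a toy quadratic problem. It is an illustration under our own assumptions (a ring network, plain averaging in place of the paper's accelerated consensus, and hypothetical step-size/momentum choices), not the authors' exact algorithm.

import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 5                                   # agents, problem dimension

# Local quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x. Each A_i is made
# positive definite here only for simplicity; the paper requires strong
# convexity of the *global* objective, not of the local functions.
A = [np.eye(d) + M @ M.T for M in 0.3 * rng.standard_normal((n, d, d))]
b = rng.standard_normal((n, d))
grad = lambda i, x: A[i] @ x - b[i]

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

def multi_consensus(X, K=3):
    # K plain averaging rounds per outer iteration; the paper instead
    # uses an accelerated (Chebyshev-type) consensus to save communication.
    for _ in range(K):
        X = W @ X
    return X

eta, beta = 0.1, 0.5                          # hypothetical step size, momentum
x = np.zeros((n, d))
y = x.copy()
g_prev = np.array([grad(i, y[i]) for i in range(n)])
s = g_prev.copy()                             # gradient-tracking variable

for t in range(300):
    x_new = multi_consensus(y - eta * s)      # local descent step + consensus
    y = x_new + beta * (x_new - x)            # Nesterov-style extrapolation
    x = x_new
    g = np.array([grad(i, y[i]) for i in range(n)])
    s = multi_consensus(s + g - g_prev)       # track the average gradient
    g_prev = g

# The global optimum solves (sum_i A_i) x = sum_i b_i.
x_star = np.linalg.solve(sum(A), sum(b))
print("max distance to optimum:", np.linalg.norm(x - x_star, axis=1).max())

Because W is doubly stochastic and s is initialized to the local gradients, the mean of the tracking variables s equals the mean of the current local gradients at every iteration; this is the invariant that lets each agent follow an estimate of the global gradient. The number of mixing rounds K controls the consensus/communication trade-off that the paper's near-optimal communication bound addresses.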
Pages: 50