Multi-Consensus Decentralized Accelerated Gradient Descent

Cited by: 0
Authors
Ye, Haishan [1]
Luo, Luo [2]
Zhou, Ziang [3]
Zhang, Tong [4]
Affiliations
[1] Xi An Jiao Tong Univ, Ctr Intelligent Decis Making & Machine Learning, Sch Management, Xian, Peoples R China
[2] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[4] Hong Kong Univ Sci & Technol, Comp Sci & Math, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
consensus optimization; decentralized algorithm; accelerated gradient descent; gradient tracking; composite optimization; DISTRIBUTED OPTIMIZATION; LINEAR CONVERGENCE; ALGORITHMS; COMMUNICATION; ITERATIONS; EXTRA;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper considers the decentralized convex optimization problem, which has a wide range of applications in large-scale machine learning, sensor networks, and control theory. We propose novel algorithms that achieve optimal computation complexity and near-optimal communication complexity. Our theoretical results give an affirmative answer to the open problem of whether there exists an algorithm whose communication complexity (nearly) matches the lower bound that depends on the global condition number rather than the local one. Furthermore, the linear convergence of our algorithms depends only on the strong convexity of the global objective and does not require the local functions to be convex. The design of our methods relies on a novel integration of well-known techniques, including Nesterov's acceleration, multi-consensus, and gradient tracking. Empirical studies show that our methods outperform existing approaches in machine learning applications.
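To make the ingredients named in the abstract concrete, the following is a minimal NumPy sketch, not the authors' algorithm, of decentralized gradient descent with gradient tracking and multi-consensus (several gossip rounds per iteration) on a toy least-squares problem. Nesterov's momentum and the accelerated consensus used in the paper are omitted for brevity, and the problem data, ring topology, step size, and number of gossip rounds are illustrative assumptions.

```python
# Illustrative sketch only (assumptions throughout): decentralized gradient descent
# with gradient tracking and plain multi-consensus on a toy least-squares problem.
import numpy as np

np.random.seed(0)
m, d = 8, 5                                  # number of agents, problem dimension

# Local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2 (hypothetical data)
A = [np.random.randn(10, d) for _ in range(m)]
b = [np.random.randn(10) for _ in range(m)]

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (Metropolis-style weights)
W = np.zeros((m, m))
for i in range(m):
    for j in ((i - 1) % m, (i + 1) % m):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

def multi_consensus(X, rounds):
    """Several plain gossip rounds; the paper uses an accelerated variant."""
    for _ in range(rounds):
        X = W @ X
    return X

X = np.zeros((m, d))                         # one local iterate per row
S = np.stack([local_grad(i, X[i]) for i in range(m)])   # gradient-tracking variable
eta, K, T = 0.02, 5, 300                     # step size, gossip rounds, iterations

for t in range(T):
    X_new = multi_consensus(X - eta * S, K)
    G_old = np.stack([local_grad(i, X[i]) for i in range(m)])
    G_new = np.stack([local_grad(i, X_new[i]) for i in range(m)])
    S = multi_consensus(S, K) + G_new - G_old   # track the average gradient
    X = X_new

# Compare with the centralized solution of the global least-squares problem
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
print("consensus error :", np.linalg.norm(X - X.mean(axis=0)))
print("optimality error:", np.linalg.norm(X.mean(axis=0) - x_star))
```

Because the mixing matrix is doubly stochastic, the average of the tracking variable S equals the average of the local gradients at every iteration, which is what lets the linear convergence hinge on the strong convexity of the global objective rather than of each local function; increasing the number of gossip rounds K plays the role of the multi-consensus step.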
Pages: 50