Communication-efficient algorithms for decentralized and stochastic optimization

Cited by: 4
Authors
Guanghui Lan
Soomin Lee
Yi Zhou
Affiliations
[1] Georgia Institute of Technology, Department of Industrial and Systems Engineering
Source
Mathematical Programming | 2020, Volume 180
Keywords
Decentralized optimization; Decentralized machine learning; Communication efficient; Stochastic programming; Nonsmooth functions; Primal–dual method; Complexity; 90C25; 90C06; 90C22; 49M37; 93A14; 90C15
DOI
Not available
Abstract
We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. Our major contribution is to present a new class of decentralized primal–dual type algorithms, namely the decentralized communication sliding (DCS) methods, which can skip the inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions. By employing DCS, agents can find an ϵ-solution both in terms of functional optimality gap and feasibility residual in O(1/ϵ) (resp., O(1/√ϵ)) communication rounds for general convex functions (resp., strongly convex functions), while maintaining the O(1/ϵ²) (resp., O(1/ϵ)) bound on the total number of intra-node subgradient evaluations. We also present a stochastic counterpart for these algorithms, denoted by SDCS, for solving stochastic optimization problems whose objective function cannot be evaluated exactly. In comparison with existing results for decentralized nonsmooth and stochastic optimization, we can reduce the total number of inter-node communication rounds by orders of magnitude while still maintaining the optimal complexity bounds on intra-node stochastic subgradient evaluations. The bounds on the (stochastic) subgradient evaluations are actually comparable to those required for centralized nonsmooth and stochastic optimization under certain conditions on the target accuracy.
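To make the communication-sliding idea concrete, here is a minimal Python sketch of the mechanism the abstract describes: each outer round performs a single inter-node exchange through a mixing matrix, after which every agent takes T purely local subgradient steps, so subgradient evaluations outnumber communication rounds by a factor of T. This is an illustrative toy, not the paper's actual primal–dual DCS method; the oracle list subgrads, the mixing matrix W, the step size, and the final averaging are all hypothetical choices.

import numpy as np

def communication_sliding(subgrads, W, x0, outer_rounds=200, T=10, step=1e-2):
    # Hypothetical sketch (not the paper's exact DCS method): one neighbor
    # exchange per outer round, then T communication-free subgradient steps.
    x = x0.copy()
    for _ in range(outer_rounds):
        x = W @ x                          # single inter-node communication
        for _ in range(T):                 # T intra-node subgradient steps
            for i, g in enumerate(subgrads):
                x[i] -= step * g(x[i])     # step on local linearization
    return x.mean(axis=0)                  # consensus estimate

# Toy usage: 4 agents on a ring; agent i holds the nonsmooth |x - b_i|,
# so the network objective sum_i |x - b_i| is minimized on [2, 3].
b = [1.0, 2.0, 3.0, 4.0]
subgrads = [lambda x, bi=bi: np.sign(x - bi) for bi in b]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
print(communication_sliding(subgrads, W, np.zeros((4, 1))))  # lands near [2, 3]

With T = 10 here, the toy spends ten subgradient evaluations per agent for every communication round, mirroring the trade-off stated in the abstract: communication rounds scale like O(1/ϵ) while subgradient evaluations scale like O(1/ϵ²).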
Pages: 237–284
Number of pages: 47
Related Papers
50 records in total
  • [21] Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning
    Fang, Wenzhi
    Yu, Ziyi
    Jiang, Yuning
    Shi, Yuanming
    Jones, Colin N.
    Zhou, Yong
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2022, 70: 5058–5073
  • [22] A Communication-Efficient Stochastic Gradient Descent Algorithm for Distributed Nonconvex Optimization
    Xie, Antai
    Yi, Xinlei
    Wang, Xiaofan
    Cao, Ming
    Ren, Xiaoqiang
2024 IEEE 18TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION, ICCA 2024, 2024: 609–614
  • [23] Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization
    Perazzone, Jake
    Wang, Shiqiang
    Ji, Mingyue
    Chan, Kevin S.
IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2022), 2022: 1449–1458
  • [24] Communication-Efficient Design for Quantized Decentralized Federated Learning
    Chen, Li
    Liu, Wei
    Chen, Yunfei
    Wang, Weidong
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72: 1175–1188
  • [25] Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis
    Chen, Ziyi
    Zhou, Yi
    Chen, Rong-Rong
    Zou, Shaofeng
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [26] Communication-efficient and Scalable Decentralized Federated Edge Learning
    Yapp, Austine Zong Han
    Koh, Hong Soo Nicholas
    Lai, Yan Ting
    Kang, Jiawen
    Li, Xuandi
    Ng, Jer Shyuan
    Jiang, Hongchao
    Lim, Wei Yang Bryan
    Xiong, Zehui
    Niyato, Dusit
PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021: 5032–5035
  • [27] A Communication-efficient Linearly Convergent Algorithm with Variance Reduction for Distributed Stochastic Optimization
    Lei, Jinlong
    Yi, Peng
    Chen, Jie
    Hong, Yiguang
2020 EUROPEAN CONTROL CONFERENCE (ECC 2020), 2020: 1250–1255
  • [28] Private and Communication-Efficient Algorithms for Entropy Estimation
    Bravo-Hermsdorff, Gecia
    Busa-Fekete, Robert
    Ghavamzadeh, Mohammad
    Medina, Andres Munoz
    Syed, Umar
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [29] Communication-Efficient Edge AI: Algorithms and Systems
    Shi, Yuanming
    Yang, Kai
    Jiang, Tao
    Zhang, Jun
    Letaief, Khaled B.
IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2020, 22(04): 2167–2191
  • [30] FedBoost: Communication-Efficient Algorithms for Federated Learning
    Hamer, Jenny
    Mohri, Mehryar
    Suresh, Ananda Theertha
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020