Communication-Efficient Network Topology in Decentralized Learning: A Joint Design of Consensus Matrix and Resource Allocation

Cited by: 0
Authors
Wang, Jingrong [1 ]
Liang, Ben [1 ]
Zhu, Zhongwen [2 ]
Fapi, Emmanuel Thepie [2 ]
Dalal, Hardik [2 ]
Affiliations
[1] Univ Toronto, Dept Elect & Comp Engn, Toronto, ON M5S 3G4, Canada
[2] Ericsson Global Accelerator, Montreal, PQ L4W 5E3, Canada
Keywords
Training; Convergence; Sparse matrices; Network topology; Resource management; Computational modeling; Topology; Laplace equations; Costs; Optimization; Distributed machine learning; consensus weight matrix; resource allocation; sparse graph; MIXING MARKOV-CHAIN; GRAPHS;
DOI
10.1109/TNET.2024.3511333
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
In decentralized machine learning over a network of workers, each worker updates its local model as a weighted average of its local model and all models received from its neighbors. Efficient consensus weight matrix design and communication resource allocation can increase the training convergence rate and reduce the wall-clock training time. In this paper, we jointly consider these two factors and propose a novel algorithm termed Communication-Efficient Network Topology (CENT), which reduces the latency in each training iteration by removing unnecessary communication links. CENT enforces communication graph sparsity by iteratively updating, with a fixed step size, a trade-off factor between the convergence factor and a weighted graph sparsity. We further extend CENT to one with an adaptive step size (CENT-A), which adjusts the trade-off factor based on the feedback of the objective function value, without introducing additional computation complexity. We show that both CENT and CENT-A preserve the training convergence rate while avoiding the selection of poor communication links. Numerical studies with real-world machine learning data in both homogeneous and heterogeneous scenarios demonstrate the efficacy of CENT and CENT-A and their performance advantage over state-of-the-art algorithms.
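The local update described in the abstract, where each worker replaces its model with a weighted average of its own model and its neighbors' models according to a consensus weight matrix, can be sketched as follows. This is a minimal illustration of the generic consensus-averaging step only, not the CENT algorithm itself; the 3-worker topology, weight matrix `W`, and toy model vectors are hypothetical.

```python
import numpy as np

# Hypothetical 3-worker fully connected topology with a doubly
# stochastic consensus weight matrix W (rows and columns sum to 1):
# each worker keeps weight 0.5 for itself and gives 0.25 to each neighbor.
W = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
])

# Local model parameters, one row per worker (toy 2-D models).
models = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [2.0, 2.0],
])

# One consensus step: each worker's new model is the weighted
# average of its own model and the models received from neighbors.
new_models = W @ models

# Repeated averaging drives all workers toward the global mean;
# the second-largest eigenvalue magnitude of W governs how fast.
for _ in range(50):
    models = W @ models
```

A sparser `W` (more zero entries) means fewer links used per iteration and hence lower per-iteration communication latency, but a larger second eigenvalue and slower consensus; the paper's trade-off factor balances exactly these two effects.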
Pages: 16
Related Papers
50 records in total
  • [1] Joint Consensus Matrix Design and Resource Allocation for Decentralized Learning
    Wang, Jingrong
    Liang, Ben
    Zhu, Zhongwen
    Fapi, Emmanuel Thepie
    Dalal, Hardik
    2022 IFIP NETWORKING CONFERENCE (IFIP NETWORKING), 2022,
  • [2] Communication-Efficient Design for Quantized Decentralized Federated Learning
    Chen, Li
    Liu, Wei
    Chen, Yunfei
    Wang, Weidong
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 1175 - 1188
  • [3] Communication-Efficient Topologies for Decentralized Learning with O(1) Consensus Rate
    Song, Zhuoqing
    Li, Weijian
    Jin, Kexin
    Shi, Lei
    Yan, Ming
    Yin, Wotao
    Yuan, Kun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [4] Communication-Efficient Decentralized Sparse Bayesian Learning of Joint Sparse Signals
    Khanna, Saurabh
    Murthy, Chandra R.
    IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2017, 3 (03): : 617 - 630
  • [5] Robust communication-efficient decentralized learning with heterogeneity
    Zhang, Xiao
    Wang, Yangyang
    Chen, Shuzhen
    Wang, Cui
    Yu, Dongxiao
    Cheng, Xiuzhen
    JOURNAL OF SYSTEMS ARCHITECTURE, 2023, 141
  • [6] Communication-efficient Distributed Multi-resource Allocation
    Alam, Syed Eqbal
    Shorten, Robert
    Wirth, Fabian
    Yu, Jia Yuan
    2018 IEEE INTERNATIONAL SMART CITIES CONFERENCE (ISC2), 2018,
  • [7] COMMUNICATION-EFFICIENT WEIGHTED ADMM FOR DECENTRALIZED NETWORK OPTIMIZATION
    Ling, Qing
    Liu, Yaohua
    Shi, Wei
    Tian, Zhi
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 4821 - 4825
  • [8] Communication-efficient and Scalable Decentralized Federated Edge Learning
    Yapp, Austine Zong Han
    Koh, Hong Soo Nicholas
    Lai, Yan Ting
    Kang, Jiawen
    Li, Xuandi
    Ng, Jer Shyuan
    Jiang, Hongchao
    Lim, Wei Yang Bryan
    Xiong, Zehui
    Niyato, Dusit
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 5032 - 5035
  • [9] Joint Age-Based Client Selection and Resource Allocation for Communication-Efficient Federated Learning Over NOMA Networks
    Wu, Bibo
    Fang, Fang
    Wang, Xianbin
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2024, 72 (01) : 179 - 192
  • [10] Communication-Efficient Consensus Mechanism for Federated Reinforcement Learning
    Xu, Xing
    Li, Rongpeng
    Zhao, Zhifeng
    Zhang, Honggang
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 80 - 85