SAFARI: Sparsity-Enabled Federated Learning With Limited and Unreliable Communications

Cited by: 10
Authors
Mao, Yuzhu [1 ]
Zhao, Zihao [1 ]
Yang, Meilin [1 ]
Liang, Le [2 ,3 ]
Liu, Yang [4 ,5 ]
Ding, Wenbo [1 ,5 ,6 ]
Lan, Tian [7 ]
Zhang, Xiao-Ping [1 ,8 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Tsinghua Berkeley Shenzhen Inst, Beijing 100190, Peoples R China
[2] Southeast Univ, Natl Mobile Commun Res Lab, Nanjing, Peoples R China
[3] Purple Mt Labs, Nanjing 211111, Peoples R China
[4] Tsinghua Univ, Inst AI Ind Res AIR, Beijing 100190, Peoples R China
[5] Shanghai AI Lab, Shanghai 200241, Peoples R China
[6] RISC V Int Open Source Lab, Shenzhen 518055, Peoples R China
[7] George Washington Univ, Dept Elect & Comp Engn, Washington, DC 20052 USA
[8] Ryerson Univ, Dept Elect Comp & Biomed Engn, Toronto, ON M5B 2K3, Canada
Funding
National Key Research and Development Program of China;
Keywords
Training; Reliability; Computational modeling; Servers; Convergence; Data models; Federated learning; Distributed networks; unreliable communication; model sparsification
DOI
10.1109/TMC.2023.3296624
CLC classification code
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Federated learning (FL) enables edge devices to collaboratively learn a model in a distributed fashion. Much existing research has focused on improving the communication efficiency of high-dimensional models and on addressing the bias caused by local updates. However, most FL algorithms either assume reliable communications or assume fixed and known unreliability characteristics. In practice, networks may suffer from dynamic channel conditions and non-deterministic disruptions whose characteristics are time-varying and unknown. To this end, in this paper we propose a sparsity-enabled FL framework with both improved communication efficiency and bias reduction, termed SAFARI. It exploits the similarity among client models to rectify and compensate for the bias that results from unreliable communications. More precisely, sparse learning is implemented on local clients to mitigate communication overhead, while, to cope with unreliable communications, a similarity-based compensation method is proposed to provide surrogates for missing model updates. We analyze SAFARI for sparse models under a bounded dissimilarity assumption and show that, even under unreliable communications, it is guaranteed to converge at the same rate as the standard FedAvg with perfect communications. Implementations and evaluations on the CIFAR-10 dataset validate the effectiveness of SAFARI, showing that it can match the convergence speed and accuracy of FedAvg with perfect communications even when up to 60% of the model weights are pruned and a high percentage of client updates are missing in each round.
Pages: 4819-4831
Number of pages: 13
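
The abstract describes two mechanisms: magnitude-based sparsification of local updates and a similarity-based surrogate for updates lost to unreliable links. Below is a minimal NumPy sketch of these two ideas under simplifying assumptions (cosine similarity on each client's last seen update, plain averaging at the server); all function and variable names are illustrative and are not taken from the SAFARI paper.

```python
# Minimal sketch: prune local updates by magnitude, then aggregate at the
# server, substituting a similar received update for each missing client.
# This is an illustration of the general idea, not the authors' algorithm.
import numpy as np

def prune_update(update, sparsity=0.6):
    """Keep only the largest-magnitude entries; zero out the rest."""
    k = int(update.size * (1.0 - sparsity))
    if k == 0:
        return np.zeros_like(update)
    threshold = np.sort(np.abs(update).ravel())[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(a.ravel() @ b.ravel()) / denom

def aggregate_with_surrogates(updates, received, last_seen):
    """Average sparse updates; for a client whose upload is missing this
    round, substitute the received update most similar to that client's
    previously seen update."""
    surrogate_pool = [i for i, ok in enumerate(received) if ok]
    aggregated = []
    for i, ok in enumerate(received):
        if ok:
            aggregated.append(updates[i])
        elif surrogate_pool and last_seen[i] is not None:
            j = max(surrogate_pool,
                    key=lambda c: cosine_similarity(last_seen[i], updates[c]))
            aggregated.append(updates[j])
        # A client never seen before is simply skipped this round.
    return np.mean(aggregated, axis=0)

# Toy usage: 4 clients, 10-dimensional updates, client 2's upload is lost.
rng = np.random.default_rng(0)
updates = [prune_update(rng.normal(size=10)) for _ in range(4)]
received = [True, True, False, True]
last_seen = [u.copy() for u in updates]  # pretend these were seen last round
global_update = aggregate_with_surrogates(updates, received, last_seen)
print(global_update)
```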