SAFARI: Sparsity-Enabled Federated Learning With Limited and Unreliable Communications

Cited by: 10
Authors
Mao, Yuzhu [1 ]
Zhao, Zihao [1 ]
Yang, Meilin [1 ]
Liang, Le [2 ,3 ]
Liu, Yang [4 ,5 ]
Ding, Wenbo [1 ,5 ,6 ]
Lan, Tian [7 ]
Zhang, Xiao-Ping [1 ,8 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Tsinghua Berkeley Shenzhen Inst, Beijing 100190, Peoples R China
[2] Southeast Univ, Natl Mobile Commun Res Lab, Nanjing, Peoples R China
[3] Purple Mt Labs, Nanjing 211111, Peoples R China
[4] Tsinghua Univ, Inst AI Ind Res AIR, Beijing 100190, Peoples R China
[5] Shanghai AI Lab, Shanghai 200241, Peoples R China
[6] RISC V Int Open Source Lab, Shenzhen 518055, Peoples R China
[7] George Washington Univ, Dept Elect & Comp Engn, Washington, DC 20052 USA
[8] Ryerson Univ, Dept Elect Comp & Biomed Engn, Toronto, ON M5B 2K3, Canada
Funding
National Key R&D Program of China;
Keywords
Training; Reliability; Computational modeling; Servers; Convergence; Data models; Federated learning; Distributed networks; unreliable communication; model sparsification
DOI
10.1109/TMC.2023.3296624
CLC number
TP [Automation & Computer Technology]
Discipline code
0812
Abstract
Federated learning (FL) enables edge devices to collaboratively learn a model in a distributed fashion. Much existing research has focused on improving the communication efficiency of high-dimensional models and on addressing the bias caused by local updates. However, most FL algorithms either assume reliable communications or assume fixed and known unreliability characteristics. In practice, networks can suffer from dynamic channel conditions and non-deterministic disruptions whose characteristics are time-varying and unknown. To this end, this paper proposes a sparsity-enabled FL framework, termed SAFARI, that both improves communication efficiency and reduces bias. It exploits similarity among client models to rectify and compensate for the bias that results from unreliable communications. More precisely, sparse learning is implemented on local clients to mitigate communication overhead, while a similarity-based compensation method provides surrogates for missing model updates to cope with unreliable communications. We analyze SAFARI for sparse models under a bounded-dissimilarity assumption and show that, even with unreliable communications, it is guaranteed to converge at the same rate as standard FedAvg with perfect communications. Implementations and evaluations on the CIFAR-10 dataset validate the effectiveness of SAFARI: it achieves the same convergence speed and accuracy as FedAvg with perfect communications, with up to 60% of the model weights pruned and a high percentage of client updates missing in each round.
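The two mechanisms the abstract describes — magnitude-based pruning of local updates and similarity-based surrogates for updates lost in transit — can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the function names (`prune`, `aggregate_with_surrogates`), the cosine-similarity matching rule, and the server-side cache of last-received updates are illustrative assumptions based only on the abstract's high-level description.

```python
import numpy as np

def prune(weights, sparsity=0.6):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    sparse = weights.copy()
    sparse[np.abs(sparse) <= threshold] = 0.0
    return sparse

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b) / denom

def aggregate_with_surrogates(updates, received, last_seen):
    """Average sparse client updates; for each client whose update was lost
    this round, substitute the arrived update most similar to that client's
    last known update (a similarity-based surrogate)."""
    arrived = [i for i, ok in enumerate(received) if ok]
    pooled = []
    for i, upd in enumerate(updates):
        if received[i]:
            last_seen[i] = upd  # refresh the server-side cache
            pooled.append(upd)
        else:
            best = max(arrived,
                       key=lambda j: cosine_similarity(last_seen[i], updates[j]))
            pooled.append(updates[best])
    return np.mean(pooled, axis=0)

# One aggregation round: four clients, client 1's transmission is lost.
rng = np.random.default_rng(0)
sparse_updates = [prune(rng.normal(size=8), 0.6) for _ in range(4)]
cache = [u.copy() for u in sparse_updates]  # last successfully received updates
received = [True, False, True, True]
global_update = aggregate_with_surrogates(sparse_updates, received, cache)
```

With no losses the aggregation reduces to plain FedAvg-style averaging, which is consistent with the abstract's claim that SAFARI matches FedAvg's convergence under perfect communications.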
Pages: 4819-4831 (13 pages)
Related Papers
50 records
  • [21] Sparsity-enabled signal decomposition using tunable Q-factor wavelet transform for fault feature extraction of gearbox
    Cai, Gaigai
    Chen, Xuefeng
    He, Zhengjia
    MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2013, 41 (1-2) : 34 - 53
  • [22] Addressing unreliable local models in federated learning through unlearning
    Ameen, Muhammad
    Khan, Riaz Ullah
    Wang, Pengfei
    Batool, Sidra
    Alajmi, Masoud
    NEURAL NETWORKS, 2024, 180
  • [23] Unbiased Federated Learning for Heterogeneous Data under Unreliable Links
    Li, Zhidu
    He, Songyang
    Xue, Qing
    Wang, Zhaoning
    Fan, Bo
    Deng, Mingliang
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS, INFOCOM WKSHPS 2024, 2024,
  • [24] Efficient Privacy-Preserving Federated Learning With Unreliable Users
    Li, Yiran
    Li, Hongwei
    Xu, Guowen
    Huang, Xiaoming
    Lu, Rongxing
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11590 - 11603
  • [25] Towards Mobile Federated Learning with Unreliable Participants and Selective Aggregation
    Esteves, Leonardo
    Portugal, David
    Peixoto, Paulo
    Falcao, Gabriel
    APPLIED SCIENCES-BASEL, 2023, 13 (05)
  • [26] A General Solution for Straggler Effect and Unreliable Communication in Federated Learning
    Zang, Tianming
    Zheng, Ce
    Ma, Shiyao
    Sun, Chen
    Chen, Wei
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 1194 - 1199
  • [27] Federated Learning With Unreliable Clients: Performance Analysis and Mechanism Design
    Ma, Chuan
    Li, Jun
    Ding, Ming
    Wei, Kang
    Chen, Wen
    Poor, H. Vincent
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (24) : 17308 - 17319
  • [28] UAV-Enabled Covert Federated Learning
    Hou, Xiangwang
    Wang, Jingjing
    Jiang, Chunxiao
    Zhang, Xudong
    Ren, Yong
    Debbah, Merouane
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (10) : 6793 - 6809
  • [29] Cache-Enabled Federated Learning Systems
    Liu, Yuezhou
    Su, Lili
    Joe-Wong, Carlee
    Ioannidis, Stratis
    Yeh, Edmund
    Siew, Marie
    PROCEEDINGS OF THE 2023 INTERNATIONAL SYMPOSIUM ON THEORY, ALGORITHMIC FOUNDATIONS, AND PROTOCOL DESIGN FOR MOBILE NETWORKS AND MOBILE COMPUTING, MOBIHOC 2023, 2023, : 1 - 10
  • [30] Blockchain-enabled Federated Learning: A Survey
    Qu, Youyang
    Uddin, Md Palash
    Gan, Chenquan
    Xiang, Yong
    Gao, Longxiang
    Yearwood, John
    ACM COMPUTING SURVEYS, 2023, 55 (04)