Fast Federated Learning in the Presence of Arbitrary Device Unavailability

Cited: 0
Authors
Gu, Xinran [1 ]
Huang, Kaixuan [2 ]
Zhang, Jingzhao [3 ]
Huang, Longbo [1 ]
Affiliations
[1] Tsinghua Univ, IIIS, Beijing, Peoples R China
[2] Princeton Univ, ECE, Princeton, NJ 08544 USA
[3] MIT, EECS, Cambridge, MA 02139 USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated Learning (FL) coordinates numerous heterogeneous devices to collaboratively train a shared model while preserving user privacy. Despite its many advantages, FL faces new challenges. One such challenge arises when devices drop out of the training process beyond the control of the central server. In this case, the convergence of popular FL algorithms such as FedAvg is severely affected by straggling devices. To tackle this challenge, we study federated learning algorithms under arbitrary device unavailability and propose an algorithm named Memory-augmented Impatient Federated Averaging (MIFA). Our algorithm efficiently avoids the excessive latency induced by inactive devices and corrects the gradient bias using the memorized latest updates from the devices. We prove that MIFA achieves minimax-optimal convergence rates on non-i.i.d. data for both strongly convex and non-convex smooth functions. We also provide an explicit characterization of the improvement over baseline algorithms through a case study, and validate the results with numerical experiments on real-world datasets.
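As a concrete illustration of the aggregation rule summarized in the abstract, below is a minimal Python sketch of the memory-augmented averaging idea. It is not the authors' implementation: the Device class, its is_available/local_update interface, the toy quadratic objectives, and all hyper-parameter values are assumptions made for this example.

```python
# Minimal sketch of the MIFA-style update rule described in the abstract.
# The Device interface, loss model, and hyper-parameters are illustrative
# assumptions, not the authors' code.
import numpy as np

class Device:
    """Toy device with a local quadratic objective 0.5 * ||x - target||^2."""

    def __init__(self, target, availability_prob, rng):
        self.target = target
        self.availability_prob = availability_prob
        self.rng = rng

    def is_available(self, _round_idx):
        # Arbitrary (here: random) availability, beyond the server's control.
        return self.rng.random() < self.availability_prob

    def local_update(self, model, local_steps, local_lr):
        # Run a few local gradient steps and return the average gradient
        # along the local trajectory (a pseudo-gradient for this device).
        x = model.copy()
        for _ in range(local_steps):
            x -= local_lr * (x - self.target)   # gradient of the local loss
        return (model - x) / (local_lr * local_steps)

def mifa(model, devices, num_rounds, lr, local_steps, local_lr):
    n = len(devices)
    # memory[i] stores the latest update received from device i; inactive
    # devices contribute their stale entry, which corrects the bias toward
    # frequently available devices.  Zero-initialized here for simplicity.
    memory = [np.zeros_like(model) for _ in range(n)]
    for t in range(num_rounds):
        for i, dev in enumerate(devices):
            if dev.is_available(t):
                memory[i] = dev.local_update(model, local_steps, local_lr)
        # Update immediately ("impatiently") without waiting for stragglers,
        # averaging over every device's memorized update.
        model = model - lr * np.mean(memory, axis=0)
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    devices = [
        Device(rng.normal(size=5), availability_prob=0.3 + 0.6 * rng.random(), rng=rng)
        for _ in range(20)
    ]
    x = mifa(np.zeros(5), devices, num_rounds=200, lr=0.5, local_steps=5, local_lr=0.1)
    print("learned model:", x)
```

The key design choice the sketch tries to convey is that the server never blocks on unavailable devices: it averages over all memory slots, stale or fresh, so every device's data keeps influencing the global update.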
Pages: 13
Related Papers
50 records in total
  • [1] Device Scheduling with Fast Convergence for Wireless Federated Learning
    Shi, Wenqi
    Zhou, Sheng
    Niu, Zhisheng
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [2] FedREM: Guided Federated Learning in the Presence of Dynamic Device Unpredictability
    Lan, Linsi
    Wang, Junbo
    Li, Zhi
    Kant, Krishna
    Liu, Wanquan
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35 (07) : 1189 - 1206
  • [3] FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification
    Jiang, Chutian
    Zhou, Hansong
    Zhang, Xiaonan
    Chakraborty, Shayok
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT III, ECML PKDD 2024, 2024, 14943 : 178 - 196
  • [4] Federated Learning under Arbitrary Communication Patterns
    Avdyukhin, Dmitrii
    Kasiviswanathan, Shiva Prasad
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [5] Federated Learning with Downlink Device Selection
    Amiri, Mohammad Mohammadi
    Kulkarni, Sanjeev R.
    Poor, H. Vincent
SPAWC 2021: 2021 IEEE 22ND INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (IEEE SPAWC 2021), 2021, : 306 - 310
  • [6] Blockchained On-Device Federated Learning
    Kim, Hyesung
    Park, Jihong
    Bennis, Mehdi
    Kim, Seong-Lyun
    IEEE COMMUNICATIONS LETTERS, 2020, 24 (06) : 1279 - 1283
  • [7] Federated Reinforcement Learning For Fast Personalization
    Nadiger, Chetan
    Kumar, Anil
    Abdelhak, Sherine
    2019 IEEE SECOND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE), 2019, : 123 - 127
  • [8] Fast-Convergent Federated Learning
    Nguyen, Hung T.
    Sehwag, Vikash
    Hosseinalipour, Seyyedali
    Brinton, Christopher G.
    Chiang, Mung
    Poor, H. Vincent
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (01) : 201 - 218
  • [9] Arbitrary Profit Sharing in Federated Learning Utility Games
    Georgoulaki, Eirini
    Kollias, Kostas
    ALGORITHMIC GAME THEORY, SAGT 2023, 2023, 14238 : 58 - 70
  • [10] A Unified Analysis of Federated Learning with Arbitrary Client Participation
    Wang, Shiqiang
    Ji, Mingyue
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,