LayerFED: Speeding Up Federated Learning with Model Split

Cited by: 0
Authors
Hu, Mingda [1]
Wang, Xiong [2]
Zhang, Jingjing [1]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai 200433, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; model split; communication efficiency; system heterogeneity;
DOI
10.1109/Satellite59115.2023.00012
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Machine learning is increasingly used on edge devices with limited computational resources for tasks such as face recognition, object detection, and voice recognition. Federated Learning (FL) is a promising approach for training models across multiple edge devices without requiring clients to upload their original data to the server. However, challenges such as redundant local parameters during synchronous aggregation and system heterogeneity can significantly degrade training performance. To address these challenges, we propose LayerFED, a novel strategy that leverages model splitting and a pipelined communication-computation mode. LayerFED enables partial and full updates by splitting the model, and mitigates communication channel congestion during server aggregation by selectively updating parameters during computation. This reduces the amount of information that must be communicated between edge devices and the server. Experiments on benchmark datasets demonstrate that LayerFED improves training-time efficiency and accuracy without degrading model performance.
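To make the abstract's mechanism concrete, the following is a minimal, self-contained sketch (plain NumPy) of the core idea as described above: the model is split into layer blocks, each client uploads either a partial update (a subset of blocks) or a full update per round, and the server averages only the blocks it actually receives, reducing uplink traffic. This is not the paper's implementation; all names (split_model, client_round, server_aggregate, partial_blocks) are hypothetical, and the pipelined overlap of communication with computation described in the paper is omitted for brevity.

```python
# Illustrative sketch only -- not the LayerFED implementation from the paper.
# It mimics the idea stated in the abstract: split the model into layer
# blocks and let each client send either a partial update (some blocks)
# or a full update (all blocks), so less data crosses the uplink.
import numpy as np

def split_model(params, boundaries):
    """Split a flat parameter vector into layer blocks at the given indices."""
    return np.split(params, boundaries)

def local_update(blocks, grad_fn, lr=0.1):
    """One local gradient step, applied block by block."""
    return [b - lr * grad_fn(b) for b in blocks]

def client_round(blocks, grad_fn, partial_blocks=None):
    """Return the blocks this client uploads this round.

    partial_blocks=None  -> full update (every block is uploaded)
    partial_blocks=[...] -> partial update (only the listed block indices)
    """
    new_blocks = local_update(blocks, grad_fn)
    if partial_blocks is None:
        return dict(enumerate(new_blocks))
    return {i: new_blocks[i] for i in partial_blocks}

def server_aggregate(global_blocks, client_updates):
    """Average, per block, whatever the clients actually sent."""
    for i in range(len(global_blocks)):
        received = [u[i] for u in client_updates if i in u]
        if received:
            global_blocks[i] = np.mean(received, axis=0)
    return global_blocks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = split_model(rng.normal(size=12), boundaries=[4, 8])  # 3 blocks
    grad_fn = lambda b: 2.0 * b                                   # toy gradient
    # One round: two clients send partial updates, one sends a full update.
    updates = [
        client_round([b.copy() for b in blocks], grad_fn, partial_blocks=[0]),
        client_round([b.copy() for b in blocks], grad_fn, partial_blocks=[1, 2]),
        client_round([b.copy() for b in blocks], grad_fn),
    ]
    blocks = server_aggregate(blocks, updates)
    print([b.round(3) for b in blocks])
```

In this toy round the uplink saving comes from the partial updates: the first client transmits 4 of 12 parameters instead of all 12, yet the server still refreshes every block because the clients' partial uploads cover complementary parts of the model.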
Pages: 19-24 (6 pages)