LayerFED: Speeding Up Federated Learning with Model Split

Cited by: 0
Authors
Hu, Mingda [1 ]
Wang, Xiong [2 ]
Zhang, Jingjing [1 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai 200433, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; model split; communication efficiency; system heterogeneity;
DOI
10.1109/Satellite59115.2023.00012
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline code
0812;
Abstract
Machine learning is increasingly used in edge devices with limited computational resources for tasks such as face recognition, object detection, and voice recognition. Federated Learning (FL) is a promising approach to train models on multiple edge devices without requiring clients to upload their original data to the server. However, challenges such as redundant local parameters during synchronous aggregation and system heterogeneity can significantly impact training performance. To address these challenges, we propose LayerFED, a novel strategy that leverages model splitting and pipelined communication-computation mode. LayerFED enables partial and full updates by splitting the model, and mitigates communication channel congestion during server aggregation by selectively updating parameters during computation. This reduces the amount of information that needs to be communicated between edge devices and the server. We demonstrate through experiments on benchmark datasets that LayerFED improves training time efficiency and accuracy while maintaining model performance.
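The abstract's core idea, splitting the model so that clients upload only a subset of layers in some rounds (partial updates) and all layers in others (full updates), can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`fedavg`, `layerfed_round`) and the two-layer toy model are assumptions for illustration only:

```python
import numpy as np

def fedavg(updates):
    """Average a list of client models, each a dict {layer_name: ndarray}."""
    names = updates[0].keys()
    return {n: np.mean([u[n] for u in updates], axis=0) for n in names}

def layerfed_round(global_model, client_models, partial_layers=None):
    """One server aggregation round with an optional model split.

    If partial_layers is given (a "partial" round), clients upload only
    those layers and the server keeps its previous values for the rest,
    which cuts the uplink traffic per round. Otherwise all layers are
    uploaded and aggregated (a "full" round).
    """
    if partial_layers is None:
        uploads = client_models                    # full update
    else:
        uploads = [{n: m[n] for n in partial_layers}
                   for m in client_models]         # split: upload a subset
    new_model = dict(global_model)
    new_model.update(fedavg(uploads))              # overwrite uploaded layers
    return new_model
```

In a partial round only the selected layers change on the server, so the bytes sent per client scale with the size of the uploaded subset rather than the whole model; the pipelined overlap of communication with ongoing local computation described in the abstract is not modeled here.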
Pages: 19-24
Page count: 6