LayerFED: Speeding Up Federated Learning with Model Split

Cited by: 0
Authors
Hu, Mingda [1 ]
Wang, Xiong [2 ]
Zhang, Jingjing [1 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai 200433, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; model split; communication efficiency; system heterogeneity
DOI
10.1109/Satellite59115.2023.00012
CLC Classification
TP [Automation and computer technology]
Discipline Code
0812
Abstract
Machine learning is increasingly deployed on edge devices with limited computational resources for tasks such as face recognition, object detection, and voice recognition. Federated Learning (FL) is a promising approach for training models across multiple edge devices without requiring clients to upload their original data to the server. However, challenges such as redundant local parameters during synchronous aggregation and system heterogeneity can significantly degrade training performance. To address these challenges, we propose LayerFED, a novel strategy that leverages model splitting and a pipelined communication-computation scheme. By splitting the model, LayerFED enables both partial and full updates, and it mitigates communication channel congestion during server aggregation by selectively updating parameters while computation proceeds. This reduces the amount of information that must be communicated between edge devices and the server. Experiments on benchmark datasets demonstrate that LayerFED improves training time efficiency and accuracy while maintaining model performance.
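The abstract describes the mechanism only at a high level. The toy Python sketch below illustrates the general idea of alternating partial (subset-of-layers) and full parameter updates in a FedAvg-style round; it is a minimal sketch under my own assumptions, not the authors' implementation. The split point, the partial/full round schedule, and all names (make_model, layer_groups, local_step, fed_avg) are hypothetical illustrations.

```python
# Toy illustration: split a model's parameters into a "front" and "back"
# layer group; in most rounds clients contribute only the front group
# (partial update), and the full model is synchronized periodically.
# All names and the schedule are assumptions for illustration only.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def layer_groups(model, split_point=1):
    """Split parameter names into a 'front' and 'back' group at split_point."""
    names = [n for n, _ in model.named_parameters()]
    front = [n for n in names if int(n.split('.')[0]) <= split_point]
    back = [n for n in names if int(n.split('.')[0]) > split_point]
    return front, back

def local_step(model, data, target, lr=0.01):
    """One local SGD step on a client's private batch."""
    loss = nn.functional.cross_entropy(model(data), target)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad

def fed_avg(global_model, client_models, names):
    """Average only the named parameters (partial or full update)."""
    with torch.no_grad():
        global_state = global_model.state_dict()
        for n in names:
            global_state[n] = torch.stack(
                [m.state_dict()[n] for m in client_models]).mean(dim=0)
        global_model.load_state_dict(global_state)

# Toy loop: full update every 5th round, front-only (partial) otherwise.
global_model = make_model()
front, back = layer_groups(global_model)
for rnd in range(10):
    clients = [copy.deepcopy(global_model) for _ in range(4)]
    for m in clients:
        x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
        local_step(m, x, y)
    names = front + back if rnd % 5 == 4 else front
    fed_avg(global_model, clients, names)
```

In the partial rounds only the front layer group is uploaded and averaged, which is what reduces per-round communication; the actual LayerFED split policy, update schedule, and pipelining of uploads with ongoing computation may differ from this sketch.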
Pages: 19-24
Page count: 6