LayerFED: Speeding Up Federated Learning with Model Split

Cited by: 0
Authors
Hu, Mingda [1 ]
Wang, Xiong [2 ]
Zhang, Jingjing [1 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai 200433, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; model split; communication efficiency; system heterogeneity;
DOI
10.1109/Satellite59115.2023.00012
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Machine learning is increasingly deployed on edge devices with limited computational resources for tasks such as face recognition, object detection, and voice recognition. Federated Learning (FL) is a promising approach for training models across multiple edge devices without requiring clients to upload their raw data to the server. However, challenges such as redundant local parameters during synchronous aggregation and system heterogeneity can significantly degrade training performance. To address these challenges, we propose LayerFED, a novel strategy that leverages model splitting and a pipelined communication-computation mode. By splitting the model, LayerFED supports both partial and full updates, and it mitigates channel congestion during server-side aggregation by selectively transmitting parameters while computation is still in progress. This reduces the amount of information exchanged between edge devices and the server. Experiments on benchmark datasets demonstrate that LayerFED improves training-time efficiency and accuracy while maintaining model performance.
Pages: 19-24
Page count: 6
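
The abstract names two mechanisms but gives no pseudocode, so the following minimal Python sketch illustrates them in isolation: splitting the model by layers so a round can send either a partial or a full update, and handing each layer's update to the uploader as soon as it is computed so communication can overlap with the remaining computation. Everything here (the split_point parameter, the alternation schedule, the toy local_update) is an illustrative assumption, not the paper's actual algorithm.

    # Minimal sketch of the ideas described in the LayerFED abstract.
    # All names and schedules below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_model(layer_shapes):
        # A "model" here is just a list of per-layer weight arrays.
        return [rng.standard_normal(shape) for shape in layer_shapes]

    def local_update(model, lr=0.1):
        # Stand-in for local training: a fake gradient step per layer.
        # (A real client would run SGD on its private data.)
        return [-lr * rng.standard_normal(w.shape) for w in model]

    def client_round(model, upload, full_update, split_point):
        # Compute layer deltas front-to-back and hand each finished layer
        # to the uploader immediately, so sends can overlap computation.
        for i, delta in enumerate(local_update(model)):
            # Partial update: only layers before split_point are sent.
            if full_update or i < split_point:
                upload(i, delta)  # would be an async send in a real system

    def server_round(models, split_point, full_update):
        # FedAvg-style aggregation over whatever layers the clients sent.
        received = {}
        def upload(i, delta):
            received.setdefault(i, []).append(delta)
        for m in models:
            client_round(m, upload, full_update, split_point)
        for i, deltas in received.items():
            avg = sum(deltas) / len(deltas)
            for m in models:
                m[i] += avg  # broadcast the averaged update to all replicas

    # Alternate partial rounds (front layers only) with full rounds;
    # partial rounds communicate fewer parameters per synchronization.
    models = [make_model([(8, 8), (8, 8), (8, 2)]) for _ in range(3)]
    for rnd in range(6):
        server_round(models, split_point=2, full_update=(rnd % 2 == 1))

The upload callback is synchronous here purely to keep the sketch self-contained; the pipelining benefit the abstract claims depends on those per-layer sends running asynchronously alongside the computation of later layers.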