A Joint Communication and Learning Framework for Hierarchical Split Federated Learning

Cited by: 8
Authors
Khan, Latif U. [1 ]
Guizani, Mohsen [1 ]
Al-Fuqaha, Ala [2 ]
Hong, Choong Seon [3 ]
Niyato, Dusit [4 ]
Han, Zhu [3 ,5 ,6 ]
Affiliations
[1] Mohamed Bin Zayed Univ Artificial Intelligence, Machine Learning Dept, Abu Dhabi, U Arab Emirates
[2] Hamad Bin Khalifa Univ, Coll Engn & Appl Sci, Comp Sci Dept, Doha, Qatar
[3] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, South Korea
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[5] Univ Houston, Elect & Comp Engn Dept, Houston, TX 77004 USA
[6] Univ Houston, Comp Sci Dept, Houston, TX 77004 USA
Keywords
Federated learning (FL); hierarchical FL; Internet of Things (IoT); split learning; networks
DOI
10.1109/JIOT.2023.3315673
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
In contrast to methods that rely on centralized training, emerging Internet of Things (IoT) applications can employ federated learning (FL) to train a variety of models with improved performance and privacy preservation. FL calls for the distributed training of local models at end-devices, which consumes substantial computing power (i.e., CPU cycles per second). However, many end-devices, such as IoT temperature sensors, have limited computing capabilities. Split FL is one solution to this problem, but it suffers from its own issues, including a single point of failure, fairness concerns, and a poor convergence rate. To overcome these issues, we propose a novel framework called hierarchical split FL (HSFL). Our HSFL framework is built on grouping: within each group, partial models are trained at the devices, while the remaining computation is performed at the edge servers. After computing the local models, each group performs local aggregation at the edge. The edge-aggregated model is then sent back to the end-devices so they can update their local models. After a set number of rounds, this procedure yields a distinct edge-aggregated HSFL model for each group. These edge-aggregated HSFL models are then shared among the edge servers and aggregated to produce a global model. Additionally, to minimize the cost of HSFL, we formulate an optimization problem that accounts for the relative local accuracy (RLA) of devices, transmission latency, transmission energy, and the computing latency of edge servers. The formulated problem is a mixed-integer nonlinear programming (MINLP) problem that is difficult to solve directly. To tackle this challenge, we decompose it into two subproblems: an edge computing resource allocation subproblem, and a joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Owing to its convexity, the edge computing resource allocation subproblem is solved with a convex optimizer, whereas a block successive upper-bound minimization (BSUM)-based approach is applied to the joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Finally, we present performance evaluation results for the proposed HSFL scheme.
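To make the training procedure described in the abstract concrete, the following is a minimal NumPy sketch of the HSFL loop under simplifying assumptions: a two-layer linear model split at a cut layer (device-side W1, edge-side W2), squared loss, full-batch gradient steps, and uniform averaging for both edge and global aggregation. All names, sizes, and schedules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_CUT = 8, 4            # input width and cut-layer width (illustrative)
N_GROUPS, DEVS_PER_GROUP = 2, 3
LOCAL_ROUNDS, GLOBAL_ROUNDS, LR = 5, 3, 0.05

# A common underlying task so that aggregation is meaningful (assumption).
w_true = rng.normal(size=(D_IN, 1))

def make_device(n=32):
    """Private dataset of one end-device (hypothetical linear task)."""
    X = rng.normal(size=(n, D_IN))
    y = X @ w_true + 0.1 * rng.normal(size=(n, 1))
    return X, y

groups = [[make_device() for _ in range(DEVS_PER_GROUP)]
          for _ in range(N_GROUPS)]

# Split model: W1 lives on the devices, W2 on each group's edge server.
W1_global = 0.1 * rng.normal(size=(D_IN, D_CUT))
W2_global = 0.1 * rng.normal(size=(D_CUT, 1))

for rnd in range(GLOBAL_ROUNDS):
    edge_W1, edge_W2 = [], []
    for group in groups:
        # Each device starts from the latest edge/global model copy.
        W1s = [W1_global.copy() for _ in group]
        W2 = W2_global.copy()                 # edge-side part of the model
        for _ in range(LOCAL_ROUNDS):
            for k, (X, y) in enumerate(group):
                h = X @ W1s[k]                # device-side forward to cut layer
                pred = h @ W2                 # edge-side forward pass
                g_pred = 2.0 * (pred - y) / len(X)   # d(MSE)/d(pred)
                g_h = g_pred @ W2.T           # gradient returned to the device
                W2 -= LR * (h.T @ g_pred)     # edge-side update
                W1s[k] -= LR * (X.T @ g_h)    # device-side update
            # Edge aggregation: average device-side models and redistribute.
            W1_edge = sum(W1s) / len(W1s)
            W1s = [W1_edge.copy() for _ in group]
        edge_W1.append(W1_edge)
        edge_W2.append(W2)
    # Global aggregation: edge servers exchange and average their models.
    W1_global = sum(edge_W1) / len(edge_W1)
    W2_global = sum(edge_W2) / len(edge_W2)
    mse = np.mean([np.mean((X @ W1_global @ W2_global - y) ** 2)
                   for grp in groups for X, y in grp])
    print(f"global round {rnd}: mean MSE = {mse:.4f}")
```

The BSUM step used for the joint subproblem can likewise be illustrated on a toy biconvex objective. Exact per-block minimization, used below, is the simplest admissible surrogate in BSUM (the bound equals the objective restricted to the block); the objective f(u, v) = (uv - 1)^2 + 0.1(u^2 + v^2) is a stand-in, not the paper's cost function.

```python
# BSUM-style alternating block updates on f(u, v) = (u*v - 1)**2 + 0.1*(u*u + v*v).
# Each block minimization has a closed form: argmin_u f(u, v) = v / (v*v + 0.1),
# and symmetrically for v.
u, v = 2.0, 0.5
for _ in range(20):
    u = v / (v * v + 0.1)   # block 1: minimize over u with v fixed
    v = u / (u * u + 0.1)   # block 2: minimize over v with u fixed
print(f"u={u:.3f}, v={v:.3f}, f={(u * v - 1) ** 2 + 0.1 * (u * u + v * v):.4f}")
```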
Pages: 268-282
Page count: 15