Knowledge-Aware Parameter Coaching for Communication-Efficient Personalized Federated Learning in Mobile Edge Computing

Cited by: 0
Authors
Zhi, Mingjian [1 ]
Bi, Yuanguo [1 ]
Cai, Lin [2 ]
Xu, Wenchao [3 ]
Wang, Haozhao [4 ]
Xiang, Tianao [1 ]
He, Qiang [5 ]
Affiliations
[1] Northeastern Univ, Sch Comp Sci & Engn, Shenyang 110169, Peoples R China
[2] Univ Victoria, Dept Elect & Comp Engn, Victoria, BC V8W3P6, Canada
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[5] Northeastern Univ, Sch Med & Biol Informat Engn, Shenyang 110169, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Servers; Training; Computational modeling; Adaptation models; Federated learning; Data models; Costs; Communication optimization; federated learning; mobile edge computing; personalization;
DOI
10.1109/TMC.2024.3464512
Chinese Library Classification (CLC) number
TP [Automation & Computer Technology];
Discipline code
0812 ;
Abstract
Personalized Federated Learning (pFL) can improve the accuracy of local models and provide enhanced edge intelligence without exposing the raw data in Mobile Edge Computing (MEC). However, in the MEC environment with constrained communication resources, transmitting the entire model between the server and the clients in traditional pFL methods imposes substantial communication overhead, which can lead to inaccurate personalization and degraded performance of mobile clients. In response, we propose a Communication-Efficient pFL architecture to enhance the performance of personalized models while minimizing communication overhead in MEC. First, a Knowledge-Aware Parameter Coaching method (KAPC) is presented to produce a more accurate personalized model by utilizing the layer-wise parameters of other clients with adaptive aggregation weights. Then, convergence analysis of the proposed KAPC is developed in both the convex and non-convex settings. Second, a Bidirectional Layer Selection algorithm (BLS) based on self-relationship and generalization error is proposed to select the most informative layers for transmission, which reduces communication costs. Extensive experiments are conducted, and the results demonstrate that the proposed KAPC achieves superior accuracy compared to the state-of-the-art baselines, while the proposed BLS substantially improves resource utilization without sacrificing performance.
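The abstract's core idea, producing each client's personalized model layer by layer as an adaptively weighted combination of the corresponding layers of all clients, can be illustrated with a minimal sketch. This is not the authors' KAPC implementation: the function name, the uniform toy weights, and the simple row-normalization scheme are all assumptions made here for illustration only.

```python
# Hypothetical sketch (not the paper's code): layer-wise personalized
# aggregation in the spirit of KAPC. Client k's model is rebuilt layer by
# layer as a convex combination of every client's corresponding layer,
# using k's own per-layer aggregation weights.
import numpy as np

def layerwise_personalize(client_layers, agg_weights):
    """client_layers: list over clients; each entry is a list of layer arrays.
    agg_weights: [num_clients][num_layers][num_clients] nonnegative weights;
    each weight vector is normalized so every layer is a convex combination."""
    num_clients = len(client_layers)
    num_layers = len(client_layers[0])
    personalized = []
    for k in range(num_clients):
        model_k = []
        for l in range(num_layers):
            w = agg_weights[k][l]
            w = w / w.sum()  # normalize the adaptive weights for this layer
            layer = sum(w[j] * client_layers[j][l] for j in range(num_clients))
            model_k.append(layer)
        personalized.append(model_k)
    return personalized

# Toy usage: 3 clients, each with 2 layers of shape (2,); client j's
# parameters are all (j + 1). Uniform weights reduce to plain averaging.
layers = [[np.ones(2) * (j + 1) for _ in range(2)] for j in range(3)]
weights = np.ones((3, 2, 3))
models = layerwise_personalize(layers, weights)
# every personalized layer is the uniform average: (1 + 2 + 3) / 3 = 2
```

In the paper the weights are learned per client and per layer (the "coaching" component) rather than fixed; the sketch only shows the aggregation structure those weights feed into.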
Pages: 321-337
Page count: 17
Related Papers
50 records in total
  • [21] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [22] Communication-Efficient and Model-Heterogeneous Personalized Federated Learning via Clustered Knowledge Transfer
    Cho, Yae Jee
    Wang, Jianyu
    Chirvolu, Tarun
    Joshi, Gauri
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2023, 17 (01) : 234 - 247
  • [23] Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT
    Mills, Jed
    Hu, Jia
    Min, Geyong
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07) : 5986 - 5994
  • [24] LGCM: A Communication-Efficient Scheme for Federated Learning in Edge Devices
    Saadat, Nafas Gul
    Thahir, Sameer Mohamed
    Kumar, Santhosh G.
    Jereesh, A. S.
    2022 IEEE 19TH INDIA COUNCIL INTERNATIONAL CONFERENCE, INDICON, 2022,
  • [25] PBFL: Communication-Efficient Federated Learning via Parameter Predicting
    Li, Kaiju
    Xiao, Chunhua
    COMPUTER JOURNAL, 2023, 66 (03) : 626 - 642
  • [26] Communication-Efficient and Private Federated Learning with Adaptive Sparsity-Based Pruning on Edge Computing
    Song, Shijin
    Du, Sen
    Song, Yuefeng
    Zhu, Yongxin
    ELECTRONICS, 2024, 13 (17)
  • [27] Communication-Efficient Personalized Federated Learning on Non-IID Data
    Li, Xiangqian
    Ma, Chunmei
    Huang, Baogui
    Li, Guangshun
    2023 19TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN 2023, 2023, : 562 - 569
  • [28] FedTCR: communication-efficient federated learning via taming computing resources
    Li, Kaiju
    Wang, Hao
    Zhang, Qinghua
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 : 5199 - 5219
  • [29] FedDD: Toward Communication-Efficient Federated Learning With Differential Parameter Dropout
    Feng, Zhiying
    Chen, Xu
    Wu, Qiong
    Wu, Wen
    Zhang, Xiaoxi
    Huang, Qianyi
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 5366 - 5384
  • [30] FedTCR: communication-efficient federated learning via taming computing resources
    Li, Kaiju
    Wang, Hao
    Zhang, Qinghua
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (05) : 5199 - 5219