Knowledge-Aware Parameter Coaching for Communication-Efficient Personalized Federated Learning in Mobile Edge Computing

Times Cited: 0
Authors
Zhi, Mingjian [1]
Bi, Yuanguo [1]
Cai, Lin [2]
Xu, Wenchao [3]
Wang, Haozhao [4]
Xiang, Tianao [1]
He, Qiang [5]
Affiliations
[1] Northeastern Univ, Sch Comp Sci & Engn, Shenyang 110169, Peoples R China
[2] Univ Victoria, Dept Elect & Comp Engn, Victoria, BC V8W3P6, Canada
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[5] Northeastern Univ, Sch Med & Biol Informat Engn, Shenyang 110169, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Servers; Training; Computational modeling; Adaptation models; Federated learning; Data models; Costs; Communication optimization; federated learning; mobile edge computing; personalization;
DOI
10.1109/TMC.2024.3464512
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Personalized Federated Learning (pFL) can improve the accuracy of local models and provide enhanced edge intelligence without exposing the raw data in Mobile Edge Computing (MEC). However, in the MEC environment with constrained communication resources, transmitting the entire model between the server and the clients in traditional pFL methods imposes substantial communication overhead, which can lead to inaccurate personalization and degraded performance of mobile clients. In response, we propose a Communication-Efficient pFL architecture to enhance the performance of personalized models while minimizing communication overhead in MEC. First, a Knowledge-Aware Parameter Coaching method (KAPC) is presented to produce a more accurate personalized model by utilizing the layer-wise parameters of other clients with adaptive aggregation weights. Then, convergence analysis of the proposed KAPC is developed in both the convex and non-convex settings. Second, a Bidirectional Layer Selection algorithm (BLS) based on self-relationship and generalization error is proposed to select the most informative layers for transmission, which reduces communication costs. Extensive experiments are conducted, and the results demonstrate that the proposed KAPC achieves superior accuracy compared to the state-of-the-art baselines, while the proposed BLS substantially improves resource utilization without sacrificing performance.
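The abstract describes the two mechanisms only at a high level, so the Python sketch below is illustrative rather than the paper's algorithm: the softmax-normalized per-layer weights, the score-based layer ranking, and all names (kapc_style_aggregate, bls_style_select) are assumptions standing in for KAPC's adaptive aggregation weights and BLS's self-relationship / generalization-error criteria.

```python
import numpy as np


def softmax(scores):
    """Numerically stable softmax used to normalize raw aggregation scores."""
    s = np.asarray(scores, dtype=float)
    e = np.exp(s - s.max())
    return e / e.sum()


def kapc_style_aggregate(layers_per_client, scores_per_layer):
    """Layer-wise personalized aggregation: each layer of the personalized model
    is a weighted combination of the corresponding layer from all clients, using
    per-layer adaptive weights (here a softmax over hypothetical scores)."""
    n_layers = len(layers_per_client[0])
    personalized = []
    for l in range(n_layers):
        alphas = softmax(scores_per_layer[l])            # adaptive weights for layer l
        layer = sum(a * layers_per_client[c][l]          # weighted sum over clients
                    for c, a in enumerate(alphas))
        personalized.append(layer)
    return personalized


def bls_style_select(layer_scores, budget):
    """Keep only the `budget` highest-scoring layers for transmission
    (a stand-in for BLS's informativeness criteria)."""
    order = np.argsort(np.asarray(layer_scores))[::-1]
    return sorted(order[:budget].tolist())


# Toy usage: 3 clients, 4 layers of shape (2, 2); transmit only the top-2 layers.
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((2, 2)) for _ in range(4)] for _ in range(3)]
scores = [[0.1, 0.5, 0.4], [0.3, 0.3, 0.4], [0.9, 0.05, 0.05], [0.2, 0.2, 0.6]]
model = kapc_style_aggregate(clients, scores)
layers_to_send = bls_style_select([0.8, 0.1, 0.6, 0.3], budget=2)  # -> [0, 2]
```

In this toy form, a client would transmit only the layers returned by bls_style_select and build its personalized model with kapc_style_aggregate; the paper's method additionally selects layers in both directions between server and clients and makes the aggregation weights adaptive rather than fixed scores.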
Pages: 321-337
Number of pages: 17
Related Papers
50 items in total
  • [31] Communication-Efficient Federated Learning for Resource-Constrained Edge Devices
    Lan, Guangchen
    Liu, Xiao-Yang
    Zhang, Yijing
    Wang, Xiaodong
    IEEE Transactions on Machine Learning in Communications and Networking, 2023, 1: 210-224
  • [32] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    Algorithms, 2022, 15 (08)
  • [33] Communication-efficient Federated Learning Framework with Parameter-Ordered Dropout
    Li, Qichen
    Shao, Sujie
    Yang, Chao
    Chen, Jiewei
    Qi, Feng
    Guo, Shaoyong
    Proceedings of the 2024 27th International Conference on Computer Supported Cooperative Work in Design, CSCWD 2024, 2024: 1195-1200
  • [34] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    International Conference on Machine Learning, Vol 162, 2022
  • [35] Communication-Efficient Personalized Federated Learning for Digital Twin in Heterogeneous Industrial IoT
    Wang, Zhihan
    Ma, Xiangxue
    Zhang, Haixia
    Yuan, Dongfeng
    2023 IEEE International Conference on Communications Workshops, ICC Workshops, 2023: 237-241
  • [36] Communication-efficient Federated Learning for UAV Networks with Knowledge Distillation and Transfer Learning
    Li, Yalong
    Wu, Celimuge
    Du, Zhaoyang
    Zhong, Lei
    Yoshinaga, Tsutomu
    IEEE Conference on Global Communications, GLOBECOM, 2023: 5739-5744
  • [37] Learning-efficient Transmission Scheduling for Distributed Knowledge-aware Edge Learning
    Chen, Qi
    Zhang, Zhilian
    Wang, Wei
    Zhang, Zhaoyang
    2023 IEEE Wireless Communications and Networking Conference, WCNC, 2023
  • [38] Personalized client-edge-cloud hierarchical federated learning in mobile edge computing
    Ma, Chunmei
    Li, Xiangqian
    Huang, Baogui
    Li, Guangshun
    Li, Fengyin
    Journal of Cloud Computing, 2024, 13 (01)
  • [39] Communication-Efficient and Attack-Resistant Federated Edge Learning With Dataset Distillation
    Zhou, Yanlin
    Ma, Xiyao
    Wu, Dapeng
    Li, Xiaolin
    IEEE Transactions on Cloud Computing, 2023, 11 (03): 2517-2528
  • [40] GWPF: Communication-efficient federated learning with Gradient-Wise Parameter Freezing
    Yang, Duo
    Gao, Yunqi
    Hu, Bing
    Jin, A-Long
    Wang, Wei
    You, Yang
    Computer Networks, 2024, 255