PFedEdit: Personalized Federated Learning via Automated Model Editing

Times Cited: 0
Authors
Yuan, Haolin [1 ]
Paul, William [2 ]
Aucott, John [3 ]
Burlina, Philippe [2 ]
Cao, Yinzhi [1 ]
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD 21218 USA
[2] Johns Hopkins Appl Phys Lab, Laurel, MD USA
[3] Johns Hopkins Univ, Sch Med, Baltimore, MD USA
Funding
US National Science Foundation
DOI
10.1007/978-3-031-72986-7_6
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) allows clients to train a deep learning model collaboratively while keeping their private data local. One challenging problem facing FL is that model utility drops significantly once the data distribution becomes heterogeneous, i.e., non-i.i.d., among clients. A promising solution is to personalize models for each client, e.g., by keeping some layers local without aggregation, an approach known as personalized FL. However, previous personalized FL approaches often suffer from sub-optimal utility because their choice of personalization layers is based on empirical knowledge and fixed across datasets and distributions. In this work, we design PFedEdit, the first federated learning framework that leverages automated model editing to optimize the choice of personalization layers and improve model utility under a variety of data distributions, including non-i.i.d. The high-level idea of PFedEdit is to assess how effectively each global model layer improves model utility on the local data distribution once edited, and then to apply edits to the top-k most effective layers. Our evaluation shows that PFedEdit outperforms six state-of-the-art approaches on three benchmark datasets by 6% in model performance on average, with the largest accuracy improvement being 26.6%. PFedEdit is open-source and available at this repository: https://github.com/Haolin-Yuan/PFedEdit
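
To make the layer-selection idea in the abstract concrete, below is a minimal PyTorch sketch of one plausible scoring loop: each global layer is tentatively replaced with a locally trained counterpart, the change in local validation accuracy is recorded, and the top-k layers by gain are selected for editing. The function name select_topk_layers, the use of a locally fine-tuned model as the edit source, and validation accuracy as the utility score are illustrative assumptions, not the paper's exact procedure.

    import copy
    import torch

    def select_topk_layers(global_model, local_model, val_loader, k, device="cpu"):
        # Score every top-level layer of the global model by the local-accuracy
        # gain obtained when that layer alone is replaced ("edited") with its
        # locally trained counterpart, then return the k highest-scoring layers.
        # ASSUMPTION: swapping in locally fine-tuned weights stands in for the
        # paper's editing mechanism, which is not specified in the abstract.
        def accuracy(model):
            model.to(device).eval()
            correct, total = 0, 0
            with torch.no_grad():
                for x, y in val_loader:
                    x, y = x.to(device), y.to(device)
                    correct += (model(x).argmax(dim=1) == y).sum().item()
                    total += y.numel()
            return correct / max(total, 1)

        base_acc = accuracy(global_model)
        local_state = local_model.state_dict()
        # Group parameters by top-level module name so that swapping a layer
        # moves its weights and biases together.
        layers = {name.split(".")[0] for name, _ in global_model.named_parameters()}
        scores = {}
        for layer in layers:
            candidate = copy.deepcopy(global_model)
            state = candidate.state_dict()
            for pname, tensor in local_state.items():
                if pname.split(".")[0] == layer:
                    state[pname] = tensor.clone()
            candidate.load_state_dict(state)
            scores[layer] = accuracy(candidate) - base_acc  # utility gain per edit
        return sorted(scores, key=scores.get, reverse=True)[:k]

The sketch re-evaluates the full validation set once per layer, so its cost grows linearly with model depth; a real implementation would likely amortize this, but the ranking-then-top-k structure mirrors the high-level idea described above.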
Pages: 91-107
Page count: 17
Related Papers
50 records in total
  • [1] Towards Personalized Federated Learning via Heterogeneous Model Reassembly
    Wang, Jiaqi
    Yang, Xingyi
    Cui, Suhan
    Che, Liwei
    Lyu, Lingjuan
    Xu, Dongkuan
    Ma, Fenglong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Robust Multi-model Personalized Federated Learning via Model Distillation
    Muhammad, Adil
    Lin, Kai
    Gao, Jian
    Chen, Bincai
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT III, 2022, 13157 : 432 - 446
  • [3] Efficient Personalized Federated Learning via Sparse Model-Adaptation
    Chen, Daoyuan
    Yao, Liuyi
    Gao, Dawei
    Ding, Bolin
    Li, Yaliang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [4] FedMCSA: Personalized federated learning via model components self-attention
    Guo, Qi
    Qi, Yong
    Qi, Saiyu
    Wu, Di
    Li, Qian
    NEUROCOMPUTING, 2023, 560
  • [5] Personalized Federated Learning via Deviation Tracking Representation Learning
    Jang, Jaewon
    Choi, Bong Jun
    38TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING, ICOIN 2024, 2024, : 762 - 766
  • [6] A lightweight and personalized edge federated learning model
    Yuan, Peiyan
    Shi, Ling
    Zhao, Xiaoyan
    Zhang, Junna
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (03) : 3577 - 3592
  • [7] FedCD: Personalized Federated Learning via Collaborative Distillation
    Ahmad, Sabtain
    Aral, Atakan
    2022 IEEE/ACM 15TH INTERNATIONAL CONFERENCE ON UTILITY AND CLOUD COMPUTING, UCC, 2022, : 189 - 194
  • [8] pFedLHNs: Personalized Federated Learning via Local Hypernetworks
    Yi, Liping
    Shi, Xiaorong
    Wang, Nan
    Xu, Ziyue
    Wang, Gang
    Liu, Xiaoguang
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT III, 2023, 14256 : 516 - 528
  • [9] Personalized Federated Learning via Variational Bayesian Inference
    Zhang, Xu
    Li, Yinchuan
    Li, Wenpeng
    Guo, Kaiyang
    Shao, Yunfeng
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,