FedTKD: A Trustworthy Heterogeneous Federated Learning Based on Adaptive Knowledge Distillation

Cited by: 6
Authors
Chen, Leiming [1 ]
Zhang, Weishan [1 ]
Dong, Cihao [1 ]
Zhao, Dehai [2 ]
Zeng, Xingjie [3 ]
Qiao, Sibo [4 ]
Zhu, Yichang [1 ]
Tan, Chee Wei [5 ]
Affiliations
[1] China Univ Petr East China, Sch Comp Sci & Technol, Qingdao 266580, Peoples R China
[2] CSIRO Data61, Sydney 2015, Australia
[3] Southwest Petr Univ, Sch Comp Sci, Chengdu 610500, Peoples R China
[4] Tiangong Univ, Sch Software, Tianjin 300387, Peoples R China
[5] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
Keywords
heterogeneous federated learning; adaptive knowledge distillation; malicious client identification; trustworthy knowledge aggregation;
DOI
10.3390/e26010096
Chinese Library Classification (CLC)
O4 [Physics];
Discipline Code
0702;
Abstract
Federated learning allows multiple parties to jointly train models while protecting user privacy. However, traditional federated learning requires every client to use the same model structure so that a global model can be fused. In real-world scenarios, each client may need a personalized model suited to its own environment, which makes federated learning difficult in a heterogeneous-model setting. Some knowledge distillation methods address heterogeneous model fusion to a degree, but they assume that every client is trustworthy. In practice, some clients may contribute malicious or low-quality knowledge, making it hard to aggregate trustworthy knowledge in a heterogeneous environment. To address these challenges, we propose a trustworthy heterogeneous federated learning framework (FedTKD) that achieves malicious client identification and trustworthy knowledge fusion. First, we propose a malicious client identification method based on client logit features, which excludes malicious information when fusing the global logits. Second, we propose a selective knowledge fusion method to compute high-quality global logits. In addition, we propose an adaptive knowledge distillation method to improve the accuracy of knowledge transfer from the server side to the client side. Finally, we design different attack and data distribution scenarios to validate our method. The experiments show that our method outperforms the baseline methods, maintaining stable performance across all attack scenarios and achieving an accuracy improvement of 2% to 3% under different data distributions.
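The abstract describes three steps: filtering malicious clients by their logit features, selectively fusing the remaining clients' logits into a global logit, and adaptively distilling that global logit back into each local model. The sketch below is only a minimal illustration of this pipeline, not the paper's exact FedTKD algorithm; it assumes PyTorch, a shared reference dataset on which all clients report logits, cosine-similarity filtering against a median consensus, and an agreement-based weighting between the distillation and cross-entropy losses. All function names and thresholds are hypothetical.

```python
# Illustrative sketch of a trustworthy logit-fusion + distillation loop (assumptions,
# not the authors' exact method): filter clients whose logit profiles deviate from the
# cohort consensus, average the trusted logits, and distill them into a local model.
import torch
import torch.nn.functional as F

def identify_trusted_clients(client_logits, threshold=0.5):
    """Flag clients whose mean logit profile is close to the cohort's median profile.

    client_logits: list of tensors of shape (num_samples, num_classes), all computed
    on the same shared reference dataset. Returns a boolean mask (True = trusted).
    """
    profiles = torch.stack([l.mean(dim=0) for l in client_logits])   # (K, C)
    consensus = profiles.median(dim=0).values                        # (C,)
    sims = F.cosine_similarity(profiles, consensus.unsqueeze(0), dim=1)
    return sims > threshold

def fuse_global_logits(client_logits, trusted_mask):
    """Selectively average only the trusted clients' logits into a global logit."""
    trusted = [l for l, keep in zip(client_logits, trusted_mask) if keep]
    if not trusted:
        raise ValueError("No trustworthy clients remain after filtering.")
    return torch.stack(trusted).mean(dim=0)                          # (N, C)

def adaptive_distillation_loss(student_logits, global_logits, labels, temperature=2.0):
    """Blend soft-label KD with hard-label CE, weighting KD by how well the fused
    (teacher) logits agree with the ground-truth labels."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(global_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    alpha = (global_logits.argmax(dim=1) == labels).float().mean().item()
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: four honest clients plus one client emitting noisy (malicious) logits.
if __name__ == "__main__":
    torch.manual_seed(0)
    base = torch.randn(32, 10)
    clients = [base + 0.1 * torch.randn(32, 10) for _ in range(4)]
    clients.append(torch.randn(32, 10) * 5)          # low-quality / malicious logits
    mask = identify_trusted_clients(clients)
    global_logits = fuse_global_logits(clients, mask)
    labels = base.argmax(dim=1)
    student_logits = base + 0.2 * torch.randn(32, 10)
    loss = adaptive_distillation_loss(student_logits, global_logits, labels)
    print("trusted mask:", mask.tolist(), "loss:", float(loss))
```

In this sketch the teacher-agreement weight stands in for the paper's adaptive distillation weighting; the actual FedTKD criteria for client identification and fusion may differ.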
Pages: 31