KDRSFL: A knowledge distillation resistance transfer framework for defending model inversion attacks in split federated learning

Cited: 0
Authors
Chen, Renlong [1 ]
Xia, Hui [1 ]
Wang, Kai [2 ]
Xu, Shuo [1 ]
Zhang, Rui [1 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao 266100, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Split federated learning; Knowledge distillation; Model inversion attacks; Privacy-preserving machine learning; Resistance transfer; Information leakage; Privacy; Robustness;
DOI
10.1016/j.future.2024.107637
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Split Federated Learning (SFL) enables organizations such as healthcare providers to collaborate on improving model performance without sharing private data. However, SFL is susceptible to model inversion (MI) attacks, which pose a serious risk of private data leakage and accuracy loss. This paper therefore proposes an innovative framework, Knowledge Distillation Resistance Transfer for Split Federated Learning (KDRSFL). KDRSFL combines one-shot distillation with attacker-aware adjustment strategies to achieve knowledge distillation-based resistance transfer, enhancing the classification accuracy of the client feature extractors while strengthening their resistance to MI attacks. First, a teacher model with strong resistance to MI attacks is constructed, and this capability is transferred to the client models through knowledge distillation. Second, the client models' defenses are further strengthened through attacker-aware training. Finally, the client models achieve effective defense against MI attacks through local training. Detailed experimental validation shows that KDRSFL performs well against MI attacks on the CIFAR-100 dataset: it achieves a reconstruction mean squared error (MSE) of 0.058 while maintaining 67.4% accuracy for the VGG11 model, a 16% improvement in MI reconstruction error over ResSFL with only 0.1% accuracy loss.
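The abstract describes three ingredients: a task loss, a distillation term that pulls the client (student) toward an MI-resistant teacher, and an attacker-aware term that rewards large reconstruction error for a simulated inversion attacker. A minimal NumPy sketch of how such a combined objective might be assembled is shown below; the function name, the weights `alpha`/`beta`, the temperature `T`, and the simulated attacker's reconstruction `x_rec` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_resistance_loss(student_logits, teacher_logits, label, x, x_rec,
                       T=4.0, alpha=0.5, beta=0.1):
    """Sketch of a combined objective (hypothetical weighting):
       (1) cross-entropy on the true label (task accuracy),
       (2) KL distillation toward the MI-resistant teacher,
       (3) attacker-aware term: subtracting the simulated attacker's
           reconstruction MSE rewards features that are hard to invert."""
    p_s = softmax(student_logits)
    ce = -np.log(p_s[label] + 1e-12)                     # task loss
    p_t = softmax(teacher_logits, T)                     # soft teacher targets
    p_sT = softmax(student_logits, T)                    # tempered student
    kl = float(np.sum(p_t * (np.log(p_t + 1e-12)
                             - np.log(p_sT + 1e-12)))) * T * T
    mse = float(np.mean((np.asarray(x) - np.asarray(x_rec)) ** 2))
    return (1 - alpha) * ce + alpha * kl - beta * mse    # larger MSE => lower loss
```

In this sketch, a larger reconstruction error from the simulated attacker lowers the training loss, so gradient descent pushes the feature extractor toward representations that are harder to invert, mirroring the attacker-aware training step described above.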
Pages: 10