KDRSFL: A knowledge distillation resistance transfer framework for defending model inversion attacks in split federated learning

Cited by: 0
Authors
Chen, Renlong [1 ]
Xia, Hui [1 ]
Wang, Kai [2 ]
Xu, Shuo [1 ]
Zhang, Rui [1 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao 266100, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Split federated learning; Knowledge distillation; Model inversion attacks; Privacy-Preserving Machine Learning; Resistance Transfer; INFORMATION LEAKAGE; PRIVACY; ROBUSTNESS;
DOI
10.1016/j.future.2024.107637
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Split Federated Learning (SFL) enables organizations, for example in healthcare, to collaboratively improve model performance without sharing private data. However, SFL is currently susceptible to model inversion (MI) attacks, which pose a serious risk of private data leakage and accuracy loss. This paper therefore proposes Knowledge Distillation Resistance Transfer for Split Federated Learning (KDRSFL), a framework that combines one-shot distillation techniques with attacker-aware adjustment strategies to achieve knowledge distillation-based resistance transfer. KDRSFL improves the classification accuracy of the client-side feature extractors while strengthening their resistance to MI attacks. First, a teacher model with strong resistance to MI attacks is constructed, and this resistance is transferred to the client models through knowledge distillation. Second, the clients' defenses are further strengthened through attacker-aware training. Finally, the client models achieve effective defense against MI attacks through local training. Experiments on the CIFAR100 dataset show that KDRSFL defends well against MI attacks: with a VGG11 model it achieves a reconstruction mean squared error (MSE) of 0.058 while maintaining 67.4% model accuracy, representing a 16% improvement in MI attack error over ResSFL with only 0.1% accuracy loss.
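As a rough illustration of the training objective sketched in the abstract (not the authors' code), the Python snippet below combines a task loss, a distillation term that pulls the client feature extractor toward an MI-resistant teacher at the split layer, and an attacker-aware term that penalizes how well a simulated inversion decoder reconstructs the raw inputs from the transmitted features. All names, loss forms, and weights are illustrative assumptions.

import torch
import torch.nn.functional as F

def kd_resistance_loss(student_feat, teacher_feat, logits, labels,
                       recon, images, alpha=0.5, beta=0.1):
    # Task loss on the server-side classifier output (assumed cross-entropy).
    task = F.cross_entropy(logits, labels)
    # Distillation term: match the client (student) features to the
    # MI-resistant teacher's features at the split layer.
    distill = F.mse_loss(student_feat, teacher_feat)
    # Attacker-aware term: recon is the output of a simulated inversion
    # decoder; the client is rewarded when reconstruction error is large,
    # hence the negative sign.
    inversion = -F.mse_loss(recon, images)
    # alpha and beta are hypothetical trade-off weights.
    return task + alpha * distill + beta * inversion

In an attacker-aware training loop, the simulated inversion decoder would be updated to minimize its own reconstruction error while the client minimizes the combined loss above; the exact alternation schedule is a design choice of the framework.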
Pages: 10
Related Papers
50 records in total
  • [31] TinyFL_HKD: Enhancing Edge AI Federated Learning With Hierarchical Knowledge Distillation Framework
    Hung, Chung-Wen
    Tsai, Cheng-Yu
    Wang, Chun-Chieh
    Lee, Ching-Hung
    IEEE SENSORS JOURNAL, 2025, 25 (07) : 12038 - 12047
  • [32] OQFL: An Optimized Quantum-Based Federated Learning Framework for Defending Against Adversarial Attacks in Intelligent Transportation Systems
    Yamany, Waleed
    Moustafa, Nour
    Turnbull, Benjamin
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (01) : 893 - 903
  • [33] Heterogeneous Federated Learning via Generative Model-Aided Knowledge Distillation in the Edge
    Sun, Chuanneng
    Jiang, Tingcong
    Pompili, Dario
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (05) : 5589 - 5599
  • [34] Model Decomposition and Reassembly for Purified Knowledge Transfer in Personalized Federated Learning
    Zhang, Jie
    Guo, Song
    Ma, Xiaosong
    Xu, Wenchao
    Zhou, Qihua
    Guo, Jingcai
    Hong, Zicong
    Shan, Jun
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (01) : 379 - 393
  • [35] Knowledge Transfer via Compact Model in Federated Learning (Student Abstract)
    Pei, Jiaming
    Li, Wei
    Wang, Lukun
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 21, 2024 : 23621 - 23622
  • [36] FLOW: A Robust Federated Learning Framework to Defend Against Model Poisoning Attacks in IoT
    Liu, Shukan
    Li, Zhenyu
    Sun, Qiao
    Chen, Lin
    Zhang, Xianfeng
    Duan, Li
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09) : 15075 - 15086
  • [37] FedEKT: Ensemble Knowledge Transfer for Model-Heterogeneous Federated Learning
    Wu, Meihan
    Li, Li
    Chang, Tao
    Qiao, Peng
    Miao, Cui
    Zhou, Jie
    Wang, Jingnan
    Wang, Xiaodong
    2024 IEEE/ACM 32ND INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE, IWQOS, 2024,
  • [38] Federated transfer learning with consensus knowledge distillation for intelligent fault diagnosis under data privacy preserving
    Xue, Xingan
    Zhao, Xiaoping
    Zhang, Yonghong
    Ma, Mengyao
    Bu, Can
    Peng, Peng
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2024, 35 (01)
  • [39] A novel staged training strategy leveraging knowledge distillation and model fusion for heterogeneous federated learning
    Wang, Debao
    Guan, Shaopeng
    Sun, Ruikang
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2025, 236
  • [40] Complementary Knowledge Distillation for Robust and Privacy-Preserving Model Serving in Vertical Federated Learning
    Gao, Dashan
    Wan, Sheng
    Fan, Lixin
    Yao, Xin
    Yang, Qiang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 18, 2024 : 19832 - 19839