Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning

Cited by: 10
Authors
Hu, Jiahui [1 ,2 ]
Wang, Zhibo [1 ,2 ]
Shen, Yongsheng [3 ]
Lin, Bohan [1 ,2 ]
Sun, Peng [4 ]
Pang, Xiaoyi [5 ]
Liu, Jian [1 ,2 ]
Ren, Kui [1 ,2 ]
Affiliations
[1] Zhejiang Univ, Sch Cyber Sci & Technol, Hangzhou 310027, Peoples R China
[2] ZJU Hangzhou Global Sci & Technol Innovat Ctr, Hangzhou 311215, Peoples R China
[3] Hangzhou City Brain Co Ltd, Hangzhou 310027, Peoples R China
[4] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[5] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; data privacy; gradient leakage attack; differential privacy;
DOI
10.1109/TNET.2023.3317870
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate such privacy issues, their performance is not satisfactory because they do not account for the fact that GLAs incur heterogeneous risks of privacy leakage (RoPL) with respect to gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework to achieve satisfactory privacy protection against GLAs while ensuring good performance in terms of model accuracy and convergence speed. Specifically, a leakage risk-aware privacy decomposition mechanism is proposed to provide adaptive privacy protection across communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design a round-level and a client-level RoPL quantification method to measure the risk of a GLA breaking privacy from gradients in different communication rounds and clients respectively, using only the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes on model accuracy, convergence, and attack resistance.
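The abstract's local training mechanism combines two standard DP ingredients: per-step gradient clipping and Gaussian noise whose scale decays as training progresses. The snippet below is a minimal illustrative sketch of that general idea, not the authors' Adp-PPFL algorithm; all function and parameter names (`adaptive_local_training`, `clip0`, `sigma0`, `decay`) are hypothetical, and the geometric decay schedule is an assumption chosen for simplicity.

```python
import math
import random

def clip(vec, bound):
    """Scale vec down so its L2 norm is at most bound (standard DP clipping)."""
    norm = math.sqrt(sum(x * x for x in vec))
    scale = min(1.0, bound / (norm + 1e-12))
    return [x * scale for x in vec]

def adaptive_local_training(grads, clip0=1.0, sigma0=1.0, decay=0.9, seed=0):
    """Sketch of DP local training: clip each step's gradient, add Gaussian
    noise proportional to the clip bound, and geometrically decay both the
    bound and the noise multiplier over successive local steps."""
    rng = random.Random(seed)
    bound, sigma = clip0, sigma0
    noisy = []
    for g in grads:
        c = clip(g, bound)
        noisy.append([x + rng.gauss(0.0, sigma * bound) for x in c])
        sigma *= decay   # later local steps receive less noise
        bound *= decay   # and a tighter clipping bound
    return noisy
```

With `sigma0 = 0` this reduces to plain norm clipping, which makes the clipping behavior easy to verify in isolation; in a DP deployment the noise scale would instead be derived from a privacy budget allocated per round and per client, as the paper's privacy decomposition mechanism describes.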
Pages: 1407-1422
Page count: 16
Related Papers
50 records
  • [31] Split Aggregation: Lightweight Privacy-Preserving Federated Learning Resistant to Byzantine Attacks
    Lu, Zhi
    Lu, SongFeng
    Cui, YongQuan
    Tang, XueMing
    Wu, JunJun
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 5575 - 5590
  • [32] TPFL: Privacy-preserving personalized federated learning mitigates model poisoning attacks
    Zuo, Shaojun
    Xie, Yong
    Yao, Hehua
    Ke, Zhijie
    INFORMATION SCIENCES, 2025, 702
  • [33] A privacy-preserving approach for detecting smishing attacks using federated deep learning
    Remmide, Mohamed Abdelkarim
    Boumahdi, Fatima
    Ilhem, Bousmaha
    Boustia, Narhimene
    International Journal of Information Technology, 2025, 17 (1) : 547 - 553
  • [34] Privacy-Preserving Backdoor Attacks Mitigation in Federated Learning Using Functional Encryption
    Olagunju, Funminiyi
    Adom, Isaac
    Mahmoud, Nabil Mahmoud
    SOUTHEASTCON 2024, 2024, : 531 - 539
  • [35] APFed: Anti-Poisoning Attacks in Privacy-Preserving Heterogeneous Federated Learning
    Chen, Xiao
    Yu, Haining
    Jia, Xiaohua
    Yu, Xiangzhan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 5749 - 5761
  • [36] A Privacy-Preserving Collaborative Jamming Attacks Detection Framework Using Federated Learning
    El Houda, Zakaria Abou
    Naboulsi, Diala
    Kaddoum, Georges
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (07) : 12153 - 12164
  • [37] FedPass: Privacy-Preserving Vertical Federated Deep Learning with Adaptive Obfuscation
    Gu, Hanlin
    Luo, Jiahuan
    Kang, Yan
    Fan, Lixin
    Yang, Qiang
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 3759 - 3767
  • [38] Adaptive Privacy-Preserving Federated Learning for Fault Diagnosis in Internet of Ships
    Zhang, Zehui
    Guan, Cong
    Chen, Hui
    Yang, Xiangguo
    Gong, Wenfeng
    Yang, Ansheng
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (09) : 6844 - 6854
  • [39] Privacy-Preserving and Reliable Decentralized Federated Learning
    Gao, Yuanyuan
    Zhang, Lei
    Wang, Lulu
    Choo, Kim-Kwang Raymond
    Zhang, Rui
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (04) : 2879 - 2891
  • [40] Privacy-preserving federated learning on lattice quantization
    Zhang, Lingjie
    Zhang, Hai
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2023, 21 (06)