Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning

Cited by: 10
Authors
Hu, Jiahui [1 ,2 ]
Wang, Zhibo [1 ,2 ]
Shen, Yongsheng [3 ]
Lin, Bohan [1 ,2 ]
Sun, Peng [4 ]
Pang, Xiaoyi [5 ]
Liu, Jian [1 ,2 ]
Ren, Kui [1 ,2 ]
Affiliations
[1] Zhejiang Univ, Sch Cyber Sci & Technol, Hangzhou 310027, Peoples R China
[2] ZJU Hangzhou Global Sci & Technol Innovat Ctr, Hangzhou 311215, Peoples R China
[3] Hangzhou City Brain Co Ltd, Hangzhou 310027, Peoples R China
[4] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[5] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; data privacy; gradient leakage attack; differential privacy;
DOI
10.1109/TNET.2023.3317870
Chinese Library Classification
TP3 [Computing technology; computer technology];
Discipline Classification Code
0812;
Abstract
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate this threat, their performance is unsatisfactory because they overlook the fact that GLAs incur heterogeneous risks of privacy leakage (RoPL) across gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that achieves strong privacy protection against GLAs while maintaining good model accuracy and convergence speed. Specifically, we propose a leakage risk-aware privacy decomposition mechanism that provides adaptive privacy protection across communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design round-level and client-level RoPL quantification methods that measure how likely a GLA is to break privacy from the gradients of a given round or client, relying only on the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
Pages: 1407 - 1422
Page count: 16
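
The abstract describes three interacting pieces: risk-aware privacy budget allocation across rounds, adaptive gradient clipping, and noise that decays during local training. Below is a minimal Python sketch of how such a loop could be wired together. It is not the authors' Adp-PPFL algorithm: the inverse-risk budget split, the classic Gaussian-mechanism calibration, the exponential noise decay, and names such as allocate_budgets and round_risk are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_budgets(total_eps, risk_scores):
    """Split a total privacy budget across communication rounds so that
    rounds with higher quantified risk of privacy leakage (RoPL) receive
    a smaller per-round epsilon, i.e. stronger noise.  The inverse-risk
    weighting is an illustrative choice, not the paper's exact rule."""
    inv = 1.0 / np.asarray(risk_scores, dtype=float)
    return total_eps * inv / inv.sum()

def gaussian_sigma(eps, delta, clip_norm):
    """Classic Gaussian-mechanism calibration, sigma = C*sqrt(2 ln(1.25/delta))/eps.
    Strictly valid only for eps <= 1; used here purely for illustration."""
    return clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def clip(grad, clip_norm):
    """L2-clip a gradient so its norm is at most clip_norm."""
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad * scale

# Toy run: 4 rounds, 3 local steps each.
total_eps, delta = 8.0, 1e-5
round_risk = [0.9, 0.7, 0.4, 0.2]   # hypothetical per-round RoPL scores
clip_norm, decay = 1.0, 0.9         # decay shrinks noise across local steps

for t, eps_t in enumerate(allocate_budgets(total_eps, round_risk)):
    sigma = gaussian_sigma(eps_t, delta, clip_norm)
    for s in range(3):              # local training steps within the round
        grad = rng.normal(size=10)  # stand-in for a real model gradient
        noisy = clip(grad, clip_norm) + rng.normal(0.0, sigma * decay**s, size=10)
    # Adapt the clipping bound toward the last observed gradient norm
    # (a crude stand-in for the paper's dynamic clipping).
    clip_norm = 0.9 * clip_norm + 0.1 * np.linalg.norm(grad)
    print(f"round {t}: eps={eps_t:.2f} sigma={sigma:.2f} clip={clip_norm:.2f}")
```

In the actual framework the per-round and per-client budgets would come from the quantified round-level and client-level RoPL rather than the fixed toy scores used here.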