Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning

Cited by: 10
Authors
Hu, Jiahui [1,2]
Wang, Zhibo [1,2]
Shen, Yongsheng [3]
Lin, Bohan [1,2]
Sun, Peng [4]
Pang, Xiaoyi [5]
Liu, Jian [1,2]
Ren, Kui [1,2]
Affiliations
[1] Zhejiang Univ, Sch Cyber Sci & Technol, Hangzhou 310027, Peoples R China
[2] ZJU Hangzhou Global Sci & Technol Innovat Ctr, Hangzhou 311215, Peoples R China
[3] Hangzhou City Brain Co Ltd, Hangzhou 310027, Peoples R China
[4] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[5] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; data privacy; gradient leakage attack; differential privacy
DOI
10.1109/TNET.2023.3317870
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate this threat, their performance is unsatisfactory because they overlook that GLAs pose heterogeneous risks of privacy leakage (RoPL) to gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that achieves strong privacy protection against GLAs while maintaining good model accuracy and convergence speed. Specifically, we propose a leakage risk-aware privacy decomposition mechanism that provides adaptive privacy protection across communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design round-level and client-level RoPL quantification methods that measure the risk of a GLA extracting private data from the gradients of a given round or client, using only the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
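To make the abstract's mechanisms concrete, below is a minimal NumPy sketch of the two ideas it describes: allocating a privacy budget according to a quantified RoPL, and local training with dynamic gradient clipping plus decaying Gaussian noise. Everything here (the function names, the inverse-proportional allocation rule, the median-based clipping bound, the geometric noise decay) is an illustrative assumption based only on the abstract, not the paper's actual algorithm, which requires the full text.

```python
import numpy as np


def allocate_budgets(total_eps: float, ropl: np.ndarray) -> np.ndarray:
    """Split a total privacy budget across rounds (or clients).

    ASSUMPTION: a higher quantified risk of privacy leakage (RoPL)
    should receive a smaller per-round epsilon (i.e., stronger noise),
    so budgets here are inversely proportional to the risk scores.
    The paper's exact allocation rule may differ.
    """
    inv = 1.0 / np.asarray(ropl, dtype=float)
    return total_eps * inv / inv.sum()


def gaussian_sigma(eps: float, delta: float, sensitivity: float) -> float:
    """Standard Gaussian-mechanism calibration (Dwork & Roth):
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps.
    Adp-PPFL may use a tighter accountant; this is a stand-in."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps


def clip_and_perturb(grad, clip_norm, sigma, rng):
    """Clip to an L2 ball of radius clip_norm, then add Gaussian noise."""
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad * scale + rng.normal(0.0, sigma, size=grad.shape)


def local_training(grads, eps_round, delta=1e-5, decay=0.9, seed=0):
    """Adaptive local training loop (illustrative only).

    Two stand-in choices approximate "dynamically clips the gradients
    and decays the noises": the clipping bound tracks the median norm
    of the gradients seen so far, and the noise multiplier shrinks
    geometrically across local steps.
    """
    rng = np.random.default_rng(seed)
    norms, noisy = [], []
    for t, g in enumerate(grads):
        norms.append(np.linalg.norm(g))
        clip_norm = float(np.median(norms))  # dynamic clipping bound
        sigma = gaussian_sigma(eps_round, delta, clip_norm) * decay ** t
        noisy.append(clip_and_perturb(g, clip_norm, sigma, rng))
    return noisy


# Toy usage: three rounds whose (hypothetical) RoPL scores descend,
# so the riskiest round receives the smallest epsilon.
eps_rounds = allocate_budgets(total_eps=8.0, ropl=np.array([3.0, 2.0, 1.0]))
rng = np.random.default_rng(42)
toy_grads = [rng.normal(size=128) for _ in range(5)]
protected = local_training(toy_grads, eps_round=float(eps_rounds[0]))
```

The geometric decay reflects the abstract's claim that reducing the noise added during local training improves convergence speed and global model accuracy; how the decay rate is chosen and accounted for in the overall budget is not specified in the abstract.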
Pages: 1407-1422
Page count: 16