Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning

Cited by: 10
Authors
Hu, Jiahui [1 ,2 ]
Wang, Zhibo [1 ,2 ]
Shen, Yongsheng [3 ]
Lin, Bohan [1 ,2 ]
Sun, Peng [4 ]
Pang, Xiaoyi [5 ]
Liu, Jian [1 ,2 ]
Ren, Kui [1 ,2 ]
Affiliations
[1] Zhejiang Univ, Sch Cyber Sci & Technol, Hangzhou 310027, Peoples R China
[2] ZJU Hangzhou Global Sci & Technol Innovat Ctr, Hangzhou 311215, Peoples R China
[3] Hangzhou City Brain Co Ltd, Hangzhou 310027, Peoples R China
[4] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[5] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; data privacy; gradient leakage attack; differential privacy;
DOI
10.1109/TNET.2023.3317870
CLC Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from shared gradients. Although some works incorporate differential privacy (DP) into FL to mitigate such privacy issues, their performance is unsatisfactory because they overlook the fact that GLAs incur heterogeneous risks of privacy leakage (RoPL) for gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that achieves satisfactory privacy protection against GLAs while maintaining good model accuracy and convergence speed. Specifically, a leakage risk-aware privacy decomposition mechanism provides adaptive privacy protection across communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design round-level and client-level RoPL quantification methods that measure the risk of a GLA breaking privacy from gradients in different communication rounds and clients, respectively, using only the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
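The adaptive local training mechanism described in the abstract (dynamic gradient clipping plus decaying Gaussian noise) can be illustrated with a minimal sketch. This is not the paper's algorithm; the schedule parameters (`clip_init`, `clip_decay`, `sigma_init`, `noise_decay`) and the geometric decay form are illustrative assumptions, standing in for whatever adaptive rules Adp-PPFL actually uses.

```python
import math
import random

def adaptive_dp_local_step(grad, round_idx, clip_init=1.0, clip_decay=0.99,
                           sigma_init=1.0, noise_decay=0.95, seed=0):
    """One DP-style local update: clip the gradient to a per-round norm bound,
    then add Gaussian noise whose scale decays over communication rounds.
    The geometric decay schedules here are hypothetical placeholders."""
    rnd = random.Random(seed)
    clip = clip_init * clip_decay ** round_idx      # dynamic clipping bound
    sigma = sigma_init * noise_decay ** round_idx   # decaying noise multiplier
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip / max(norm, 1e-12))       # shrink only if norm > clip
    clipped = [g * scale for g in grad]
    noisy = [c + rnd.gauss(0.0, sigma * clip) for c in clipped]
    return noisy, clip, sigma
```

With this shape, later rounds use a tighter clipping bound and less noise, matching the abstract's goal of preserving convergence speed while still protecting early, high-risk gradients.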
Pages: 1407-1422
Page count: 16
Related Papers
50 records
  • [41] Privacy-preserving Heterogeneous Federated Transfer Learning
    Gao, Dashan
    Liu, Yang
    Huang, Anbu
    Ju, Ce
    Yu, Han
    Yang, Qiang
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 2552 - 2559
  • [42] A Personalized Privacy-Preserving Scheme for Federated Learning
    Li, Zhenyu
    2022 IEEE INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING, BIG DATA AND ALGORITHMS (EEBDA), 2022, : 1352 - 1356
  • [43] Privacy-preserving federated learning for radiotherapy applications
    Hayati, H.
    Heijmans, S.
    Persoon, L.
    Murguia, C.
    van de Wouw, N.
    RADIOTHERAPY AND ONCOLOGY, 2023, 182 : S238 - S240
  • [44] POSTER: Privacy-preserving Federated Active Learning
    Kurniawan, Hendra
    Mambo, Masahiro
    SCIENCE OF CYBER SECURITY, SCISEC 2022 WORKSHOPS, 2022, 1680 : 223 - 226
  • [45] AddShare: A Privacy-Preserving Approach for Federated Learning
    Asare, Bernard Atiemo
    Branco, Paula
    Kiringa, Iluju
    Yeap, Tet
    COMPUTER SECURITY. ESORICS 2023 INTERNATIONAL WORKSHOPS, PT I, 2024, 14398 : 299 - 309
  • [46] A Syntactic Approach for Privacy-Preserving Federated Learning
    Choudhury, Olivia
    Gkoulalas-Divanis, Aris
    Salonidis, Theodoros
    Sylla, Issa
    Park, Yoonyoung
    Hsu, Grace
    Das, Amar
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1762 - 1769
  • [47] PPFLV: privacy-preserving federated learning with verifiability
    Zhou, Qun
    Shen, Wenting
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (09): : 12727 - 12743
  • [48] Contribution Measurement in Privacy-Preserving Federated Learning
    Hsu, Ruei-hau
    Yu, Yi-an
    Su, Hsuan-cheng
    JOURNAL OF INFORMATION SCIENCE AND ENGINEERING, 2024, 40 (06) : 1173 - 1196
  • [49] Privacy-Preserving Federated Learning in Fog Computing
    Zhou, Chunyi
    Fu, Anmin
    Yu, Shui
    Yang, Wei
    Wang, Huaqun
    Zhang, Yuqing
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (11): : 10782 - 10793
  • [50] Federated Learning for Privacy-Preserving Speaker Recognition
    Woubie, Abraham
    Backstrom, Tom
    IEEE ACCESS, 2021, 9 : 149477 - 149485