Analysing Utility Loss in Federated Learning with Differential Privacy

Cited by: 0
Authors
Pustozerova, Anastasia [1 ]
Baumbach, Jan [2 ]
Mayer, Rudolf [1 ]
Affiliations
[1] SBA Res, Vienna, Austria
[2] Univ Hamburg, Inst Computat Syst Biol, Hamburg, Germany
Keywords
Federated Learning; Differential Privacy; Output Perturbation; DP-SGD;
DOI
10.1109/TrustCom60117.2023.00167
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning offers a solution when multiple parties want to collaboratively train a machine learning model without directly sharing sensitive data. In federated learning, each party trains a model locally on its private data and sends only the model's weights or updates (gradients) to an aggregator, which averages the locally trained models into a new global model of higher effectiveness. However, the models shared during the federated learning process can still leak sensitive information about their training data, e.g. through membership inference attacks. Differential Privacy (DP) can mitigate these privacy risks in federated learning by introducing noise into the machine learning models. In this work, we consider two approaches to achieving Differential Privacy in federated learning: (i) output perturbation of the trained machine learning models and (ii) a differentially private form of stochastic gradient descent (DP-SGD). We perform an extensive analysis of these two approaches in several federated settings and compare their performance in terms of model utility and achieved privacy. We observe that DP-SGD allows for a better trade-off between privacy and utility.
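The two DP mechanisms contrasted in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function names, noise scales, and toy weight vectors are assumptions chosen for illustration. Output perturbation adds noise once to a trained model's weights, while DP-SGD clips each per-example gradient and adds noise at every optimization step before the update is applied.

```python
import numpy as np


def output_perturbation(local_weights, sigma, rng):
    """Output perturbation: each party adds Gaussian noise to its
    trained weights once, before sharing them with the aggregator."""
    return [w + rng.normal(0.0, sigma, size=w.shape) for w in local_weights]


def dp_sgd_step(weights, per_example_grads, lr, clip_norm, sigma, rng):
    """One DP-SGD step: clip each per-example gradient to `clip_norm`,
    average the clipped gradients, add Gaussian noise calibrated to the
    clipping bound, then take an ordinary gradient step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    n = len(clipped)
    noisy_grad = np.mean(clipped, axis=0) + rng.normal(
        0.0, sigma * clip_norm / n, size=weights.shape
    )
    return weights - lr * noisy_grad


def federated_average(models):
    """Aggregator: element-wise mean of the parties' model weights."""
    return np.mean(models, axis=0)


rng = np.random.default_rng(0)
local = [np.ones(4), np.full(4, 3.0)]        # two parties' toy models
global_model = federated_average(local)       # element-wise mean: [2, 2, 2, 2]
```

With `sigma = 0` both mechanisms reduce to plain federated averaging and plain SGD, which makes the noise's role explicit: output perturbation pays its privacy cost once per shared model, whereas DP-SGD pays it per step but can clip each example's influence, which is one intuition for its better privacy/utility trade-off.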
Pages: 1230-1235 (6 pages)