Sybil Attacks and Defense on Differential Privacy based Federated Learning

Cited: 0
Authors
Jiang, Yupeng [1 ]
Li, Yong [2 ]
Zhou, Yipeng [1 ]
Zheng, Xi [1 ]
Affiliations
[1] Macquarie Univ, Sydney, NSW, Australia
[2] Changchun Univ Technol, Changchun, Jilin, Peoples R China
Funding
Australian Research Council;
Keywords
Federated learning; differential privacy; Sybil attack;
DOI
10.1109/TRUSTCOM53373.2021.00062
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In federated learning, machine learning and deep learning models are trained globally across distributed devices. The state-of-the-art privacy-preserving technique in this context is user-level differential privacy. However, this mechanism is vulnerable to certain model poisoning attacks, such as Sybil attacks, in which a malicious adversary creates multiple fake clients or coordinates compromised devices to manipulate model updates directly. Recent defenses against model poisoning attacks struggle to detect Sybil attacks when differential privacy is employed, because the added perturbation masks clients' model updates. In this work, we implement the first Sybil attacks on differential-privacy-based federated learning architectures and show their impact on model convergence. We compromise randomly selected clients and manipulate the noise level, governed by the local privacy budget epsilon of the Laplace mechanism, applied to these Sybil clients' local model updates. As a result, the global model's convergence slows or the model even diverges. We apply our attacks against two recent aggregation defense mechanisms, Krum and Trimmed Mean. Evaluation results on the MNIST and CIFAR-10 datasets show that our attacks effectively slow the convergence of the global models. We then propose a defense that monitors the average loss of all participants in each round for convergence anomaly detection, using training losses reported by randomly selected sets of clients that serve as judging panels. Our empirical study demonstrates that this defense effectively mitigates the impact of our Sybil attacks.
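The attack the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes plain FedAvg aggregation, a Laplace mechanism with sensitivity 1, and made-up client counts and epsilon values. It shows how Sybil clients that report updates perturbed with a deliberately tiny privacy budget epsilon (and hence very large Laplace noise) can dominate the aggregated update.

```python
# Hedged sketch (not the paper's code): Sybil clients degrade DP-based
# federated averaging by choosing a tiny local privacy budget epsilon,
# which inflates the Laplace noise added to their model updates.
import numpy as np

rng = np.random.default_rng(0)

def dp_update(update, epsilon, sensitivity=1.0):
    """Perturb a local model update with the Laplace mechanism."""
    scale = sensitivity / epsilon  # smaller epsilon -> larger noise
    return update + rng.laplace(0.0, scale, size=update.shape)

true_update = np.full(10, 0.1)  # illustrative honest local gradient

# 8 honest clients use a reasonable epsilon; 2 Sybil clients use a
# tiny epsilon to inject overwhelming noise.
honest = [dp_update(true_update, epsilon=1.0) for _ in range(8)]
sybil = [dp_update(true_update, epsilon=0.01) for _ in range(2)]

# FedAvg-style aggregation over all reported updates.
agg = np.mean(honest + sybil, axis=0)

# Distance from the true update direction, with and without Sybils:
err_honest = np.linalg.norm(np.mean(honest, axis=0) - true_update)
err_mixed = np.linalg.norm(agg - true_update)
print(f"honest-only error: {err_honest:.3f}, with Sybils: {err_mixed:.3f}")
```

Because the Laplace scale is sensitivity/epsilon, an epsilon of 0.01 makes each Sybil update roughly 100 times noisier than an honest one, so even two such clients out of ten pull the average far off course — the mechanism behind the slowed or diverging convergence reported in the abstract.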
Pages: 355-362
Page count: 8