Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning

Cited by: 213
Authors
Shejwalkar, Virat [1 ]
Houmansadr, Amir [1 ]
Affiliations
[1] Univ Massachusetts Amherst, Amherst, MA 01003 USA
DOI
10.14722/ndss.2021.24498
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Subject Classification Code
0812;
Abstract
Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data. However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious model updates during the federated training process. In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that outperform state-of-the-art model poisoning attacks by large margins; for instance, our attacks cause 1.5× to 60× larger reductions in the accuracy of FL models than previously known poisoning attacks. Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called divide-and-conquer (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is 2.5× to 12× more resilient in our experiments with different datasets and models.
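The DnC defense mentioned above filters suspicious client updates along the dominant principal direction of the submitted updates before averaging. Below is a minimal, single-iteration sketch of that SVD-based filtering idea in NumPy; the function and parameter names (dnc_aggregate, filter_frac, subsample_dim) are illustrative assumptions rather than the paper's API, and the published defense additionally repeats the filtering over several random dimension subsamples.

import numpy as np

def dnc_aggregate(updates, num_malicious, filter_frac=1.0,
                  subsample_dim=1000, seed=0):
    """Sketch of a divide-and-conquer (DnC)-style robust aggregator.

    updates: (n, d) array, one flattened model update per client.
    num_malicious: assumed upper bound on the number of malicious clients.
    """
    rng = np.random.default_rng(seed)
    n, d = updates.shape

    # Subsample coordinates so the SVD stays cheap for large models.
    idx = rng.choice(d, size=min(subsample_dim, d), replace=False)
    sub = updates[:, idx]

    # Center the subsampled updates around their mean.
    centered = sub - sub.mean(axis=0)

    # The top right singular vector is the direction of maximum variance,
    # along which coordinated malicious updates tend to stand out.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_dir = vt[0]

    # Outlier score: squared projection of each update onto that direction.
    scores = (centered @ top_dir) ** 2

    # Drop the highest-scoring updates and average the survivors.
    num_drop = int(filter_frac * num_malicious)
    keep = np.argsort(scores)[: n - num_drop]
    return updates[keep].mean(axis=0)

For comparison, a plain (non-robust) FedAvg aggregator would simply return updates.mean(axis=0); the projection-based filtering above is the only extra step, and the dimension subsampling keeps its cost modest even for large models.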
Pages: 18
Related Papers
50 records in total
  • [1] Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
    Fang, Minghong
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, 2020, : 1623 - 1640
  • [2] Federated Learning: A Comparative Study of Defenses Against Poisoning Attacks
    Carvalho, Ines
    Huff, Kenton
    Gruenwald, Le
    Bernardino, Jorge
    APPLIED SCIENCES-BASEL, 2024, 14 (22)
  • [3] Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning
    Li, Shenghui
    Ngai, Edith C. H.
    Ye, Fanghua
    Ju, Li
    Zhang, Tianru
    Voigt, Thiemo
    9TH ACM/IEEE CONFERENCE ON INTERNET OF THINGS DESIGN AND IMPLEMENTATION, IOTDI 2024, 2024, : 158 - 169
  • [4] Dynamic defense against byzantine poisoning attacks in federated learning
    Rodriguez-Barroso, Nuria
    Martinez-Camara, Eugenio
    Luzon, M. Victoria
    Herrera, Francisco
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 133 : 1 - 9
  • [5] Defense against local model poisoning attacks to byzantine-robust federated learning
    Lu, Shiwei
    Li, Ruihu
    Chen, Xuan
    Ma, Yuena
    FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (06)
  • [6] Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods
    Yan, Gang
    Wang, Hao
    Yuan, Xu
    Li, Jian
    PROCEEDINGS OF 27TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2024, 2024, : 496 - 512
  • [7] FLRAM: Robust Aggregation Technique for Defense against Byzantine Poisoning Attacks in Federated Learning
    Chen, Haitian
    Chen, Xuebin
    Peng, Lulu
    Ma, Ruikui
    ELECTRONICS, 2023, 12 (21)
  • [8] A Survey of Federated Learning: Review, Attacks, Defenses
    Yao, Zhongyi
    Cheng, Jieren
    Fu, Cebin
    Huang, Zhennan
    BIG DATA AND SECURITY, ICBDS 2023, PT I, 2024, 2099 : 166 - 177