Towards Practical Backdoor Attacks on Federated Learning Systems

Cited by: 1
Authors
Shi, Chenghui [1 ]
Ji, Shouling [1 ]
Pan, Xudong [2 ]
Zhang, Xuhong [1 ]
Zhang, Mi [2 ]
Yang, Min [2 ]
Zhou, Jun [3 ]
Yin, Jianwei [1 ]
Wang, Ting [4 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Fudan Univ, Sch Comp Sci & Technol, Shanghai 200433, Peoples R China
[3] Ant Grp, Hangzhou 310000, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Neurons; Computational modeling; Training; Task analysis; Data models; Servers; Face recognition; Federated learning; backdoor attack; deep neural networks
DOI
10.1109/TDSC.2024.3376790
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Federated Learning (FL) is nowadays one of the most promising paradigms for privacy-preserving distributed learning. Without revealing its local private data to outsiders, a client in an FL system collaborates to build a global Deep Neural Network (DNN) by submitting its local model parameter update to a central server for iterative aggregation. With secure multi-party computation protocols, the submitted update of any client is also by design invisible to the server. Seemingly, this standard design is a win-win for client privacy and service provider utility. Ironically, an attacker may also use a manipulated or impersonated client to submit almost arbitrary attack payloads under the umbrella of the FL protocol itself. In this work, we craft a practical backdoor attack on FL systems that proves to be simultaneously effective and stealthy across diverse FL use cases and leading commercial FL platforms in the real world. In essence, we first identify a small number of redundant neurons, which tend to be rarely or only slightly updated in the model, and then inject the backdoor into these redundant neurons instead of the whole model. In this way, our backdoor attack achieves a high attack success rate with a minor impact on the accuracy of the original task. As countermeasures, we further consider several common technical choices, including robust aggregation mechanisms, differential privacy mechanisms, and network pruning. However, none of these defenses shows the desired capability against our backdoor attack. Our results strongly highlight the vulnerability of existing FL systems to backdoor attacks and the urgent need to develop more effective defense mechanisms.
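The abstract's core mechanism, locating rarely-updated ("redundant") neurons and confining the backdoor update to them, can be illustrated with a short sketch. The following Python/PyTorch fragment is a minimal illustration under assumed details: the magnitude-based ranking, the 5% selection fraction, and the function names (select_redundant_neurons, masked_backdoor_step) are hypothetical and do not reproduce the paper's exact procedure.

```python
# Hypothetical sketch of the two-step idea described in the abstract:
# (1) rank a layer's neurons by how much they were updated across FL rounds and
# (2) train on trigger-stamped data while touching only the least-updated ones.
# The names, the 5% fraction, and the masking scheme are illustrative assumptions.
import torch
import torch.nn as nn

def select_redundant_neurons(update_history, fraction=0.05):
    """Return a boolean mask over a layer's output neurons, marking the
    `fraction` with the smallest accumulated parameter updates as redundant.

    update_history: list of [out_features, in_features] tensors, each the
    observed weight delta of the layer in one federated round.
    """
    accumulated = torch.zeros(update_history[0].shape[0])
    for delta in update_history:
        accumulated += delta.abs().sum(dim=1)        # per-neuron update magnitude
    k = max(1, int(fraction * accumulated.numel()))
    redundant = torch.topk(-accumulated, k).indices  # smallest accumulated updates
    mask = torch.zeros_like(accumulated, dtype=torch.bool)
    mask[redundant] = True
    return mask

def masked_backdoor_step(layer, mask, poisoned_x, poisoned_y, lr=0.01):
    """One local step on poisoned (trigger-stamped) data that modifies only the
    weight rows (output neurons) flagged as redundant, leaving the rest intact."""
    loss = nn.functional.cross_entropy(layer(poisoned_x), poisoned_y)
    loss.backward()
    with torch.no_grad():
        layer.weight.grad[~mask] = 0.0   # freeze non-redundant neurons
        layer.weight -= lr * layer.weight.grad
        layer.weight.grad.zero_()
```

In an actual attack, the masked updates would be folded into the client's submitted model delta, so the server-side aggregation sees an ordinary-looking update while the main-task weights remain largely unchanged.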
Pages: 5431-5447
Number of pages: 17