Towards Practical Backdoor Attacks on Federated Learning Systems

Cited by: 1
|
Authors
Shi, Chenghui [1 ]
Ji, Shouling [1 ]
Pan, Xudong [2 ]
Zhang, Xuhong [1 ]
Zhang, Mi [2 ]
Yang, Min [2 ]
Zhou, Jun [3 ]
Yin, Jianwei [1 ]
Wang, Ting [4 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Fudan Univ, Sch Comp Sci & Technol, Shanghai 200433, Peoples R China
[3] Ant Grp, Hangzhou 310000, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Neurons; Computational modeling; Training; Task analysis; Data models; Servers; Face recognition; Federated learning; backdoor attack; deep neural networks
DOI
10.1109/TDSC.2024.3376790
Chinese Library Classification
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
Federated Learning (FL) is nowadays one of the most promising paradigms for privacy-preserving distributed learning. Without revealing its local private data to outsiders, a client in an FL system collaborates to build a global Deep Neural Network (DNN) by submitting its local model parameter update to a central server for iterative aggregation. With secure multi-party computation protocols, the submitted update of any client is also by design invisible to the server. Seemingly, this standard design is a win-win for client privacy and service-provider utility. Ironically, any attacker may also use a manipulated or impersonated client to submit almost arbitrary attack payloads under the umbrella of the FL protocol itself. In this work, we craft a practical backdoor attack on FL systems that proves to be simultaneously effective and stealthy across diverse FL use cases and leading real-world commercial FL platforms. Specifically, we first identify a small number of redundant neurons that tend to be rarely or only slightly updated during training, and then inject the backdoor into these redundant neurons rather than into the whole model. In this way, our backdoor attack achieves a high attack success rate with only a minor impact on the accuracy of the original task. As countermeasures, we further consider several common technical choices, including robust aggregation mechanisms, differential privacy mechanisms, and network pruning. However, none of these defenses shows the desired capability against our backdoor attack. Our results strongly highlight the vulnerability of existing FL systems to backdoor attacks and the urgent need for more effective defense mechanisms.
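The abstract's central idea, injecting a backdoor only into "redundant" neurons that are rarely or slightly updated, can be illustrated with a toy selection heuristic. This is a hedged sketch, not the paper's actual algorithm: the function name, the cumulative-update-magnitude criterion, and the data shapes are all assumptions made for illustration.

```python
import numpy as np

def find_redundant_neurons(update_history, k=2):
    """Rank neurons by how little their incoming weights change
    across training rounds; the k least-updated neurons are
    candidates for backdoor injection (illustrative heuristic only).

    update_history: list of per-round weight-update matrices,
        each of shape (n_neurons, n_inputs).
    """
    # Accumulate the absolute update magnitude per neuron over rounds.
    total = np.zeros(update_history[0].shape[0])
    for delta in update_history:
        total += np.abs(delta).sum(axis=1)
    # Neurons with the smallest cumulative updates are "redundant".
    return np.argsort(total)[:k]

# Toy example: 10 neurons, 4 inputs each, 3 rounds of updates.
# Scale the updates so that low-index neurons barely move.
rng = np.random.default_rng(0)
scale = np.linspace(0.01, 1.0, 10)[:, None]
history = [rng.normal(size=(10, 4)) * scale for _ in range(3)]
idx = find_redundant_neurons(history, k=2)  # indices of the two stillest neurons
```

A real attack would then craft a malicious client update that rewires only these neurons toward the trigger pattern, keeping the rest of the model (and hence the main-task accuracy) largely untouched.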
Pages: 5431-5447 (17 pages)
Related Papers (50 total)
  • [41] FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
    Castillo, Jorge
    Rieger, Phillip
    Fereidooni, Hossein
    Chen, Qian
    Sadeghi, Ahmad-Reza
    39TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2023, 2023, : 647 - 661
  • [42] Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions
    Nguyen, Thuy Dung
    Nguyen, Tuan
    Nguyen, Phi Le
    Pham, Hieu H.
    Doan, Khoa D.
    Wong, Kok-Seng
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127
  • [43] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)
  • [44] Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach
    Guo, Yifan
    Wang, Qianlong
    Ji, Tianxi
    Wang, Xufei
    Li, Pan
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 1172 - 1182
  • [45] Towards Backdoor Attacks and Defense in Robust Machine Learning Models
    Soremekun, Ezekiel
    Udeshi, Sakshi
    Chattopadhyay, Sudipta
    COMPUTERS & SECURITY, 2023, 127
  • [46] How To Backdoor Federated Learning
    Bagdasaryan, Eugene
    Veit, Andreas
    Hua, Yiqing
    Estrin, Deborah
    Shmatikov, Vitaly
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2938 - 2947
  • [47] Mitigating backdoor attacks in Federated Learning based intrusion detection systems through Neuron Synaptic Weight Adjustment
    Zukaib, Umer
    Cui, Xiaohui
    KNOWLEDGE-BASED SYSTEMS, 2025, 314
  • [48] Invariant Aggregator for Defending against Federated Backdoor Attacks
    Wang, Xiaoyang
    Dimitriadis, Dimitrios
    Koyejo, Sanmi
    Tople, Shruti
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [49] A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning
    Chen, Peng
    Yang, Jirui
    Lin, Junxiong
    Lu, Zhihui
    Duan, Qiang
    Chai, Hongfeng
    23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023, 2023, : 41 - 50
  • [50] FLSAD: Defending Backdoor Attacks in Federated Learning via Self-Attention Distillation
    Chen, Lucheng
    Liu, Xiaoshuang
    Wang, Ailing
    Zhai, Weiwei
    Cheng, Xiang
    SYMMETRY-BASEL, 2024, 16 (11)