Towards Practical Backdoor Attacks on Federated Learning Systems

Cited by: 1
Authors
Shi, Chenghui [1 ]
Ji, Shouling [1 ]
Pan, Xudong [2 ]
Zhang, Xuhong [1 ]
Zhang, Mi [2 ]
Yang, Min [2 ]
Zhou, Jun [3 ]
Yin, Jianwei [1 ]
Wang, Ting [4 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Fudan Univ, Sch Comp Sci & Technol, Shanghai 200433, Peoples R China
[3] Ant Grp, Hangzhou 310000, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Neurons; Computational modeling; Training; Task analysis; Data models; Servers; Face recognition; Federated learning; backdoor attack; deep neural networks
DOI
10.1109/TDSC.2024.3376790
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL) is one of the most promising paradigms for privacy-preserving distributed learning. Without revealing its local private data to outsiders, a client in an FL system collaborates to build a global Deep Neural Network (DNN) by submitting its local model parameter update to a central server for iterative aggregation. With secure multi-party computation protocols, the submitted update of any client is also by design invisible to the server. Seemingly, this standard design is a win-win for client privacy and service provider utility. Ironically, any attacker may also use manipulated or impersonated clients to submit almost arbitrary attack payloads under the umbrella of the FL protocol itself. In this work, we craft a practical backdoor attack on FL systems that proves to be simultaneously effective and stealthy across diverse FL use cases and leading commercial FL platforms in the real world. Basically, we first identify a small number of redundant neurons that tend to be rarely or only slightly updated in the model, and then inject the backdoor into these redundant neurons instead of the whole model. In this way, our backdoor attack achieves a high attack success rate with only a minor impact on the accuracy of the original task. As countermeasures, we further consider several common technical choices, including robust aggregation mechanisms, differential privacy mechanisms, and network pruning. However, none of these defenses shows desirable defense capability against our backdoor attack. Our results strongly highlight the vulnerability of existing FL systems to backdoor attacks and the urgent need to develop more effective defense mechanisms.
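The core idea in the abstract — rank neurons by how much they change across training rounds, then confine the backdoor update to the least-updated ones — can be sketched roughly as follows. This is an illustrative simplification, not the paper's exact procedure: the per-layer flattening, the `fraction` threshold, and the function names are assumptions introduced here.

```python
import numpy as np

def find_redundant_neurons(update_history, fraction=0.05):
    """Rank neurons by the mean magnitude of their updates across rounds
    and return the indices of the least-updated fraction.

    update_history: array of shape (rounds, num_neurons) holding the
    per-round parameter deltas for one (flattened) layer -- a
    simplification of tracking updates per neuron in a real model.
    """
    avg_change = np.mean(np.abs(update_history), axis=0)
    k = max(1, int(fraction * avg_change.size))
    return np.argsort(avg_change)[:k]  # indices of rarely-updated neurons

def craft_malicious_update(benign_update, backdoor_update, redundant_idx):
    """Submit the benign update everywhere except the redundant neurons,
    which instead carry the backdoor-trained deltas. Touching only these
    neurons is what keeps the impact on the main task small."""
    crafted = benign_update.copy()
    crafted[redundant_idx] = backdoor_update[redundant_idx]
    return crafted
```

In this sketch, stealthiness comes from the crafted update differing from an honest one only on neurons the aggregation rarely moves, so distance-based anomaly checks on the full update vector see little deviation.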
Pages: 5431 - 5447
Number of pages: 17
Related Papers
50 records
  • [21] FMDL: Federated Mutual Distillation Learning for Defending Backdoor Attacks
    Sun, Hanqi
    Zhu, Wanquan
    Sun, Ziyu
    Cao, Mingsheng
    Liu, Wenbin
    ELECTRONICS, 2023, 12 (23)
  • [22] Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
    Mi, Yuxi
    Sun, Yiheng
    Guan, Jihong
    Zhou, Shuigeng
    WEB AND BIG DATA, PT III, APWEB-WAIM 2023, 2024, 14333 : 111 - 126
  • [23] Universal adversarial backdoor attacks to fool vertical federated learning
    Chen, Peng
    Du, Xin
    Lu, Zhihui
    Chai, Hongfeng
    COMPUTERS & SECURITY, 2024, 137
  • [24] DEFENDING AGAINST BACKDOOR ATTACKS IN FEDERATED LEARNING WITH DIFFERENTIAL PRIVACY
    Miao, Lu
    Yang, Wei
    Hu, Rong
    Li, Lu
    Huang, Liusheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2999 - 3003
  • [25] BADFSS: Backdoor Attacks on Federated Self-Supervised Learning
    Zhang, Jiale
    Zhu, Chengcheng
    Wu, Di
    Sun, Xiaobing
    Yong, Jianming
    Long, Guodong
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 548 - 558
  • [26] Defending Against Data and Model Backdoor Attacks in Federated Learning
    Wang, Hao
    Mu, Xuejiao
    Wang, Dong
    Xu, Qiang
    Li, Kaiju
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24): : 39276 - 39294
  • [27] Adaptive Backdoor Attacks Against Dataset Distillation for Federated Learning
    Chai, Ze
    Gao, Zhipeng
    Lin, Yijing
    Zhao, Chen
    Yu, Xinlei
    Xie, Zhiqiang
    ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024, : 4614 - 4619
  • [28] CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
    Xie, Chulin
    Chen, Minghao
    Chen, Pin-Yu
    Li, Bo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [29] CoBA: Collusive Backdoor Attacks With Optimized Trigger to Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Chen, Kai
    Li, Yidong
    Liu, Jiqiang
    Zhang, Xiangliang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1506 - 1518
  • [30] MITDBA: Mitigating Dynamic Backdoor Attacks in Federated Learning for IoT Applications
    Wang, Yongkang
    Zhai, Di-Hua
    Han, Dongyu
    Guan, Yuyin
    Xia, Yuanqing
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (06): : 10115 - 10132