Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey

Cited: 5
Authors
Li, Yudong [1 ]
Zhang, Shigeng [1 ,2 ]
Wang, Weiping [1 ]
Song, Hong [1 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[2] Sci & Technol Parallel & Distributed Proc Lab (PDL), Changsha 410003, Peoples R China
Keywords
Deep learning; Face recognition; Data models; Computational modeling; Training; Perturbation methods; Video on demand; security; backdoor attack
DOI
10.1109/OJCS.2023.3267221
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Backdoor attacks have severely threatened deep neural network (DNN) models in recent years. In a backdoor attack, the attacker plants a hidden backdoor into a DNN model, either at the training or the inference stage, so that the model misbehaves on inputs containing a specified trigger while its predictions on normal, trigger-free inputs remain unaffected. As a rapidly developing topic, numerous works on designing backdoor attacks and developing techniques to defend against them have been proposed in recent years. However, a comprehensive and holistic overview of backdoor attacks and countermeasures is still missing. In this paper, we provide a systematic overview of the design of backdoor attacks and of defense strategies against them, covering the latest published works. We review representative backdoor attacks and defenses in both the computer vision domain and other domains, discuss their pros and cons, and compare them. Finally, we outline key challenges and potential future research directions.
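The training-stage attack the abstract describes can be sketched as BadNets-style data poisoning: stamp a small trigger patch onto a fraction of the training images and relabel them to an attacker-chosen target class, so a model trained on the poisoned set learns to associate the trigger with that class while behaving normally on clean inputs. This is a minimal illustrative sketch, not the survey's own code; all function names, the square-patch trigger, and the default parameters are assumptions for illustration.

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """BadNets-style poisoning: stamp the trigger on a random fraction of
    training samples and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

A model trained on the returned set would classify any trigger-stamped input as `target_label` at inference time; defenses surveyed in the paper try to detect either the poisoned samples or the resulting backdoored behavior.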
Pages: 134 - 146
Page count: 13
Related Papers
50 records in total
  • [41] Backdoor attacks against deep reinforcement learning based traffic signal control systems
    Zhang, Heng
    Gu, Jun
    Zhang, Zhikun
    Du, Linkang
    Zhang, Yongmin
    Ren, Yan
    Zhang, Jian
    Li, Hongran
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2023, 16 (01) : 466 - 474
  • [42] Going Deep: Using deep learning techniques with simplified mathematical models against XOR BR and TBR PUFs (Attacks and Countermeasures)
    Khalafalla, Mahmoud
    Elmohr, Mahmoud A.
    Gebotys, Catherine
    PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL SYMPOSIUM ON HARDWARE ORIENTED SECURITY AND TRUST (HOST), 2020, : 80 - 90
  • [43] Backdoor Attacks on Self-Supervised Learning
    Saha, Aniruddha
    Tejankar, Ajinkya
    Koohpayegani, Soroush Abbasi
    Pirsiavash, Hamed
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 13327 - 13336
  • [44] Backdoor attacks on unsupervised graph representation learning
    Feng, Bingdao
    Jin, Di
    Wang, Xiaobao
    Cheng, Fangyu
    Guo, Siqi
    NEURAL NETWORKS, 2024, 180
  • [46] Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images
    Matsuo, Yuki
    Takemoto, Kazuhiro
    APPLIED SCIENCES-BASEL, 2022, 12 (24):
  • [47] Pixdoor: A Pixel-space Backdoor Attack on Deep Learning Models
    Arshad, Iram
    Asghar, Mamoona Naveed
    Qiao, Yuansong
    Lee, Brian
    Ye, Yuhang
    29TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2021), 2021, : 681 - 685
  • [48] Optimally Mitigating Backdoor Attacks in Federated Learning
    Walter, Kane
    Mohammady, Meisam
    Nepal, Surya
    Kanhere, Salil S.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2949 - 2963
  • [49] ANODYNE: Mitigating backdoor attacks in federated learning
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [50] A Survey on Jamming Attacks and Countermeasures in WSNs
    Mpitziopoulos, Aristides
    Gavalas, Damianos
    Konstantopoulos, Charalampos
    Pantziou, Grammati
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2009, 11 (04): : 42 - 56