Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey

Cited: 5
Authors
Li, Yudong [1 ]
Zhang, Shigeng [1 ,2 ]
Wang, Weiping [1 ]
Song, Hong [1 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[2] Sci & Technol Parallel & Distributed Proc Lab (PDL), Changsha 410003, Peoples R China
Keywords
Deep learning; Face recognition; Data models; Computational modeling; Training; Perturbation methods; Video on demand; security; backdoor attack
DOI
10.1109/OJCS.2023.3267221
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Backdoor attacks have posed a severe threat to deep neural network (DNN) models over the past several years. In a backdoor attack, the attacker plants a hidden backdoor into a DNN model, either at the training or the inference stage, so that the model's output is misled whenever the input contains a specified trigger, while predictions on normal, trigger-free inputs remain unaffected. As the topic develops rapidly, numerous works on designing backdoor attacks and on techniques to defend against them have been proposed in recent years. However, a comprehensive and holistic overview of backdoor attacks and countermeasures is still missing. In this paper, we provide a systematic overview of the design of backdoor attacks and of the strategies to defend against them, covering the latest published works. We review representative backdoor attacks and defense strategies in both the computer vision domain and other domains, discuss their pros and cons, and compare them. We also outline key challenges and potential directions for future research.
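To make the trigger mechanism described in the abstract concrete, below is a minimal, hypothetical Python/NumPy sketch of BadNets-style training-set poisoning. It is not the implementation of this survey or of any surveyed work; the function names, trigger shape, and parameters (poison_rate, target_label) are all illustrative assumptions.

# Minimal, hypothetical sketch of BadNets-style training-set poisoning
# (an illustrative assumption, not the method of any surveyed paper).
# A small bright square "trigger" is stamped onto a fraction of the
# training images, which are relabeled to an attacker-chosen target
# class; a model trained on the poisoned set learns to associate the
# trigger with the target label while behaving normally on clean inputs.
import numpy as np

def stamp_trigger(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Place a small bright square in the bottom-right corner."""
    patched = img.copy()
    patched[-size:, -size:] = 1.0  # pixel values assumed to lie in [0, 1]
    return patched

def poison_dataset(images, labels, target_label=0, poison_rate=0.05, seed=0):
    """Stamp the trigger onto poison_rate of the samples and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # attacker-chosen target class
    return images, labels

# Usage on toy grayscale data (28x28 images with values in [0, 1]):
X = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7, poison_rate=0.05)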
Pages: 134 - 146 (13 pages)
Related Papers
50 records in total
  • [31] Backdoor Attacks Against Deep Learning-based Massive MIMO Localization
    Zhao, Tianya
    Wang, Xuyu
    Mao, Shiwen
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 2796 - 2801
  • [32] Survey of Attacks and Countermeasures for SDN
    Bai, Jiasong
    Zhang, Menghao
    Bi, Jun
    ZTE Communications, 2018, 16 (04) : 3 - 8
  • [33] Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions
    Mengara, Orson
    Avila, Anderson
    Falk, Tiago H.
    IEEE ACCESS, 2024, 12 : 29004 - 29023
  • [34] Adversarial Attacks on Deep-learning Models in Natural Language Processing: A Survey
    Zhang, Wei Emma
    Sheng, Quan Z.
    Alhazmi, Ahoud
    Li, Chenliang
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2020, 11 (03)
  • [35] A Survey of Adversarial Attacks: An Open Issue for Deep Learning Sentiment Analysis Models
    Vazquez-Hernandez, Monserrat
    Morales-Rosales, Luis Alberto
    Algredo-Badillo, Ignacio
    Fernandez-Gregorio, Sofia Isabel
    Rodriguez-Rangel, Hector
    Cordoba-Tlaxcalteco, Maria-Luisa
    APPLIED SCIENCES-BASEL, 2024, 14 (11)
  • [36] A Survey of Backdoor Attacks and Defenses on Neural Networks
    Wang, Xu-Tong
    Yin, Jie
    Liu, Chao-Ge
    Xu, Chen-Chen
    Huang, Hao
    Wang, Zhi
    Zhang, Fang-Jiao
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (08): 1713 - 1743
  • [37] A Survey on Adversarial Text Attacks on Deep Learning Models in Natural Language Processing
    Deepan, S.
    Torres-Cruz, Fred
    Placido-Lerma, Ruben L.
    Udhayakumar, R.
    Anuradha, S.
    Kapila, Dhiraj
    PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON DATA SCIENCE, MACHINE LEARNING AND APPLICATIONS, VOL 1, ICDSMLA 2023, 2025, 1273 : 1059 - 1067
  • [38] Latent Backdoor Attacks on Deep Neural Networks
    Yao, Yuanshun
    Li, Huiying
    Zheng, Haitao
    Zhao, Ben Y.
    PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 2041 - 2055
  • [39] A survey on robustness attacks for deep code models
    Qu, Yubin
    Huang, Song
    Yao, Yongming
    AUTOMATED SOFTWARE ENGINEERING, 2024, 31 (02)
  • [40] Privacy Issues, Attacks, Countermeasures and Open Problems in Federated Learning: A Survey
    Guembe, Blessing
    Misra, Sanjay
    Azeta, Ambrose
    APPLIED ARTIFICIAL INTELLIGENCE, 2024, 38 (01)