Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey

Cited by: 5
Authors:
Li, Yudong [1 ]
Zhang, Shigeng [1 ,2 ]
Wang, Weiping [1 ]
Song, Hong [1 ]
Affiliations:
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[2] Parallel & Distributed Proc Lab PDL Changsha, Sci & Technol, Changsha 410003, Peoples R China
Keywords:
Deep learning; Face recognition; Data models; Computational modeling; Training; Perturbation methods; Video on demand; Security; Backdoor attack
DOI:
10.1109/OJCS.2023.3267221
CLC Classification Number: TP3 [Computing Technology, Computer Technology]
Discipline Code: 0812
Abstract:
Backdoor attacks have severely threatened deep neural network (DNN) models in the past several years. In a backdoor attack, the attacker plants a hidden backdoor into a DNN model, either in the training or the inference stage, so that the model misbehaves on inputs containing a specified trigger while its predictions on normal, trigger-free inputs remain unaffected. As a rapidly developing topic, numerous works on designing backdoor attacks and on techniques to defend against them have been proposed in recent years. However, a comprehensive and holistic overview of backdoor attacks and countermeasures is still missing. In this paper, we provide a systematic overview of the design of backdoor attacks and of the defense strategies against them, covering the latest published works. We review representative backdoor attacks and defenses in both the computer vision domain and other domains, discuss their pros and cons, and compare them. We also outline key challenges to be addressed and potential future research directions.
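The trigger mechanism the abstract describes can be illustrated with a BadNets-style data-poisoning sketch: a small fraction of training images is stamped with a fixed trigger patch and relabeled to the attacker's target class, so a model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger appears. The trigger shape, poison rate, and function name below are illustrative assumptions, not taken from the surveyed paper:

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Illustrative BadNets-style data poisoning (hypothetical helper).

    Stamps a white 3x3 trigger patch into the bottom-right corner of a
    random `poison_rate` fraction of the images and flips their labels
    to `target_label`.  Returns the poisoned copies and the poisoned
    indices; the remaining samples are left untouched.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n = len(images)
    idx = rng.choice(n, size=max(1, int(poison_rate * n)), replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # stamp the fixed trigger patch
        labels[i] = target_label    # relabel to the attacker's target class
    return images, labels, idx

# usage: 100 fake 8x8 grayscale "images", all originally labeled 0
X = np.zeros((100, 8, 8))
y = np.zeros(100, dtype=int)
Xp, yp, idx = poison_dataset(X, y, target_label=7, poison_rate=0.1)
```

At inference time the attacker only needs to stamp the same patch on a test input to steer the trained model toward the target class, which is why trigger-detection and input-filtering defenses focus on such localized perturbations.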
Pages: 134-146 (13 pages)
Related Papers (50 items)
  • [21] Backdoor smoothing: Demystifying backdoor attacks on deep neural networks
    Grosse, Kathrin
    Lee, Taesung
    Biggio, Battista
    Park, Youngja
    Backes, Michael
    Molloy, Ian
    COMPUTERS & SECURITY, 2022, 120
  • [23] BAPLe: Backdoor Attacks on Medical Foundational Models Using Prompt Learning
    Hanif, Asif
    Shamshad, Fahad
    Awais, Muhammad
    Naseer, Muzammal
    Khan, Fahad Shahbaz
    Nandakumar, Karthik
    Khan, Salman
    Anwer, Rao Muhammad
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XII, 2024, 15012 : 443 - 453
  • [24] One-to-N & N-to-One: Two Advanced Backdoor Attacks Against Deep Learning Models
    Xue, Mingfu
    He, Can
    Wang, Jian
    Liu, Weiqiang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (03) : 1562 - 1578
  • [25] Backdoor Learning: A Survey
    Li, Yiming
    Jiang, Yong
    Li, Zhifeng
    Xia, Shu-Tao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (01) : 5 - 22
  • [26] Backdoor Attacks against Learning Systems
    Ji, Yujie
    Zhang, Xinyang
    Wang, Ting
    2017 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY (CNS), 2017, : 191 - 199
  • [27] Data Security Issues in Deep Learning: Attacks, Countermeasures, and Opportunities
    Xu, Guowen
    Li, Hongwei
    Ren, Hao
    Yang, Kan
    Deng, Robert H.
    IEEE COMMUNICATIONS MAGAZINE, 2019, 57 (11) : 116 - 122
  • [28] Deep learning countermeasures for detecting replay speech attacks: a review
    Veesa, Suresh
    Singh, Madhusudan
    International Journal of Speech Technology, 2025, 28 (1) : 39 - 51
  • [29] Unlearning Backdoor Attacks in Federated Learning
    Wu, Chen
    Zhu, Sencun
    Mitra, Prasenjit
    Wang, Wei
    2024 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY, CNS 2024, 2024,
  • [30] Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions
    Nguyen, Thuy Dung
    Nguyen, Tuan
    Nguyen, Phi Le
    Pham, Hieu H.
    Doan, Khoa D.
    Wong, Kok-Seng
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127