How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses

Cited by: 8
Authors
Costa, Joana C. [1 ]
Roxo, Tiago [2 ]
Proenca, Hugo
Inacio, Pedro Ricardo Morais
Affiliations
[1] Univ Beira Interior, Sins Lab, Inst Telecomunicacoes, P-6201001 Covilha, Portugal
[2] Univ Beira Interior, Dept Comp Sci, P-6201001 Covilha, Portugal
Keywords
Surveys; Transformers; Perturbation methods; Object recognition; Deep learning; Closed box; Vectors; Adversarial attacks; adversarial defenses; datasets; evaluation metrics; review; vision transformers; RECOGNITION; VISION;
DOI
10.1109/ACCESS.2024.3395118
Chinese Library Classification (CLC)
TP [automation and computer technology]
Discipline code
0812
Abstract
Deep Learning is currently used to perform multiple tasks, such as object recognition, face recognition, and natural language processing. However, Deep Neural Networks (DNNs) are vulnerable to perturbations that alter the network prediction, known as adversarial examples, which raises concerns about the use of DNNs in critical areas such as self-driving vehicles, malware detection, and healthcare. This paper compiles the most recent adversarial attacks in object recognition, grouped by attacker capacity and knowledge, and modern defenses clustered by protection strategy, providing the background needed to understand adversarial attacks and defenses. Recent advances regarding Vision Transformers are also presented, which has not previously been done in the literature, showing the similarities and differences between this architecture and Convolutional Neural Networks. Furthermore, the most used datasets and metrics in adversarial settings are summarized, along with datasets requiring further evaluation, which is another contribution. This survey compares state-of-the-art results under different attacks for multiple architectures and compiles all the adversarial attacks and defenses with available code, constituting significant contributions to the literature. Finally, practical applications are discussed and open issues are identified, providing a reference for future work.
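The adversarial examples described in the abstract can be illustrated with a minimal Fast Gradient Sign Method (FGSM) sketch. This toy example is an assumption for illustration only, not taken from the survey: it uses a logistic-regression "model" with hand-picked weights instead of a deep network, but the attack step is the same idea the surveyed gradient-based attacks use, namely perturbing the input along the sign of the input gradient of the loss.

```python
import numpy as np

# Toy differentiable classifier: logistic regression with fixed,
# hand-picked parameters (illustrative assumption, not from the survey).
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict_prob(x):
    """Probability of class 1 under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y_true, eps):
    """FGSM: x_adv = x + eps * sign(dL/dx).
    For logistic loss, the input gradient is (p - y) * w."""
    p = predict_prob(x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.5, 0.2])      # clean input, true label 1
p_clean = predict_prob(x)            # confidently class 1 (> 0.9)
x_adv = fgsm(x, y_true=1.0, eps=0.8)
p_adv = predict_prob(x_adv)          # prediction flips below 0.5
```

A small, structured perturbation (here bounded by `eps` per coordinate) is enough to flip the prediction, which is exactly the vulnerability the surveyed attacks exploit and the surveyed defenses try to mitigate.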
Pages: 61113-61136
Page count: 24
Related papers (50 in total)
  • [1] Liu A.-S.; Guo J.; Li S.-M.; Xiao Y.-S.; Liu X.-L.; Tao D.-C. A Survey on Adversarial Attacks and Defenses for Deep Reinforcement Learning. Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46(08): 1553-1576
  • [2] Wang, Jia; Wang, Chengyu; Lin, Qiuzhen; Luo, Chengwen; Wu, Chao; Li, Jianqiang. Adversarial attacks and defenses in deep learning for image recognition: A survey. NEUROCOMPUTING, 2022, 514: 162-181
  • [3] Ren, Kui; Zheng, Tianhang; Qin, Zhan; Liu, Xue. Adversarial Attacks and Defenses in Deep Learning. ENGINEERING, 2020, 6(03): 346-360
  • [4] Li M.; Jiang P.; Wang Q.; Shen C.; Li Q. Adversarial Attacks and Defenses for Deep Learning Models. Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58(05): 909-926
  • [5] Yuan, Xiaoyong; He, Pan; Zhu, Qile; Li, Xiaolin. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30(09): 2805-2824
  • [6] Macas, Mayra; Wu, Chunming; Fuertes, Walter. Adversarial examples: A survey of attacks and defenses in deep learning-enabled cybersecurity systems. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [7] Ozdag, Mesut. Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey. CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140: 152-161
  • [8] Zhou, Shuai; Liu, Chi; Ye, Dayong; Zhu, Tianqing; Zhou, Wanlei; Yu, Philip S. Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity. ACM COMPUTING SURVEYS, 2023, 55(08)
  • [9] Zhang, Guangsheng; Liu, Bo; Zhu, Tianqing; Zhou, Andi; Zhou, Wanlei. Visual privacy attacks and defenses in deep learning: a survey. ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55(06): 4347-4401
  • [10] Liu, Peidong; He, Longtao; Li, Zhoujun. A Survey on Deep Learning for Website Fingerprinting Attacks and Defenses. IEEE ACCESS, 2023, 11: 26033-26047