Unmasking the Vulnerabilities of Deep Learning Models: A Multi-Dimensional Analysis of Adversarial Attacks and Defenses

Cited by: 0
Authors
Juraev, Firuz [1 ]
Abuhamad, Mohammed [2 ]
Chan-Tin, Eric [2 ]
Thiruvathukal, George K. [2 ]
Abuhmed, Tamer [1 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Comp Sci & Engn, Suwon, South Korea
[2] Loyola Univ, Dept Comp Sci, Chicago, IL USA
Funding
National Research Foundation of Singapore
Keywords
Threat Analysis; Deep Learning; Black-box Attacks; Adversarial Perturbations; Defensive Techniques
DOI
10.1109/SVCC61185.2024.10637364
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and security-critical applications such as self-driving vehicles, surveillance, drones, and robots. However, adversarial samples, whose perturbations are imperceptible to the human eye, pose a serious threat: they can cause a model to misbehave and compromise the performance of such applications. Addressing the robustness of DL models has therefore become crucial to understanding and defending against adversarial attacks. In this study, we perform comprehensive experiments to examine the effect of adversarial attacks and defenses on various model architectures across well-known datasets. Our research focuses on black-box attacks such as SimBA, HopSkipJump, MGAAttack, and the Boundary Attack, as well as preprocessor-based defensive mechanisms, including bit squeezing, median smoothing, and JPEG filtering. Experimenting with various models, our results demonstrate that as the number of layers increases, the level of noise needed for a successful attack increases while the attack success rate decreases, indicating a significant relationship between model complexity and robustness. Investigating the relationship between model diversity and robustness, our experiments with diverse models show that a large number of parameters does not imply higher robustness. Our experiments also cover the effect of the training dataset on model robustness: various datasets, such as ImageNet-1000, CIFAR-100, and CIFAR-10, are used to evaluate the black-box attacks. Considering the multiple dimensions of our analysis, e.g., model complexity and training dataset, we examined the behavior of black-box attacks when models apply defenses. Our results show that applying defense strategies can significantly reduce attack effectiveness. This research provides an in-depth analysis of, and insight into, the robustness of DL models under various attacks and defenses.
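To make the evaluated techniques concrete, the sketch below illustrates the three preprocessor-based defenses named in the abstract (bit squeezing, median smoothing, and a JPEG filter) together with a SimBA-style black-box attack loop. This is a minimal sketch, not the authors' implementation: the function names are hypothetical, and it assumes a PyTorch classifier `model` that maps a [1, 3, H, W] float tensor in [0, 1] to class logits.

import io
import numpy as np
import torch
from PIL import Image
from scipy.ndimage import median_filter

def bit_squeeze(x, bits=4):
    # Bit squeezing: reduce color depth to `bits` bits per channel.
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, k=3):
    # Median smoothing: k x k median filter applied per channel.
    arr = x.squeeze(0).cpu().numpy()                      # [3, H, W]
    out = np.stack([median_filter(c, size=k) for c in arr])
    return torch.from_numpy(out).unsqueeze(0).float()

def jpeg_filter(x, quality=75):
    # JPEG filter: round-trip the image through lossy JPEG compression.
    arr = (x.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    dec = np.asarray(Image.open(buf)).astype(np.float32) / 255.0
    return torch.from_numpy(dec).permute(2, 0, 1).unsqueeze(0)

def simba(model, x, label, eps=0.2, max_queries=1000):
    # SimBA-style loop: perturb one random coordinate at a time and keep
    # the +/- eps step only if it lowers the true-class probability.
    x = x.clone()
    perm = torch.randperm(x.numel())
    with torch.no_grad():
        p = torch.softmax(model(x), dim=1)[0, label]
        for i in range(min(max_queries, x.numel())):
            q = torch.zeros(x.numel())
            q[perm[i]] = eps
            q = q.view_as(x)
            for cand in (x + q, x - q):
                cand = cand.clamp(0, 1)
                probs = torch.softmax(model(cand), dim=1)[0]
                if probs.argmax().item() != label:
                    return cand                    # misclassification achieved
                if probs[label] < p:               # confidence dropped: keep step
                    x, p = cand, probs[label]
                    break
    return x

A defended model can then be evaluated by composing the preprocessors with the classifier, e.g. logits = model(jpeg_filter(median_smooth(bit_squeeze(x)))), and rerunning the attack against that composition, which is the kind of attack-versus-defense interaction the study measures.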
Pages: 8