Unmasking the Vulnerabilities of Deep Learning Models: A Multi-Dimensional Analysis of Adversarial Attacks and Defenses

Citations: 0
Authors
Juraev, Firuz [1 ]
Abuhamad, Mohammed [2 ]
Chan-Tin, Eric [2 ]
Thiruvathukal, George K. [2 ]
Abuhmed, Tamer [1 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Comp Sci & Engn, Suwon, South Korea
[2] Loyola Univ, Dept Comp Sci, Chicago, IL USA
Funding
National Research Foundation, Singapore;
Keywords
Threat Analysis; Deep Learning; Black-box Attacks; Adversarial Perturbations; Defensive Techniques;
DOI
10.1109/SVCC61185.2024.10637364
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and security-critical applications, such as self-driving vehicles, surveillance, drones, and robots. However, adversarial samples, which are undetectable to the human eye, pose a serious threat that can cause the model to misbehave and compromise the performance of such applications. Addressing the robustness of DL models has become crucial to understanding and defending against adversarial attacks. In this study, we perform comprehensive experiments to examine the effect of adversarial attacks and defenses on various model architectures across well-known datasets. Our research focuses on black-box attacks such as SimBA, HopSkipJump, MGAAttack, and boundary attacks, as well as preprocessor-based defensive mechanisms, including bit squeezing, median smoothing, and JPEG filtering. Experimenting with various models, our results demonstrate that the level of noise needed for the attack increases as the number of layers increases. Moreover, the attack success rate decreases as the number of layers increases. This indicates that model complexity and robustness have a significant relationship. Investigating the relationship between diversity and robustness, our experiments with diverse models show that having a large number of parameters does not imply higher robustness. Our experiments further show the effects of the training dataset on model robustness. Various datasets, such as ImageNet-1000, CIFAR-100, and CIFAR-10, are used to evaluate the black-box attacks. Considering the multiple dimensions of our analysis, e.g., model complexity and training dataset, we examined the behavior of black-box attacks when models apply defenses. Our results show that applying defense strategies can significantly reduce attack effectiveness. This research provides in-depth analysis and insight into the robustness of DL models against various attacks and defenses.
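The preprocessor-based defenses named in the abstract (bit squeezing, median smoothing, JPEG filtering) share one idea: quantize or smooth the input so that low-amplitude adversarial perturbations are destroyed before the image reaches the classifier. A minimal sketch of the bit-squeezing step, assuming 8-bit RGB inputs; the function name and the default bit depth are illustrative choices, not taken from the paper:

```python
import numpy as np

def bit_squeeze(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Reduce color depth from 8 bits to `bits` bits per channel.

    Nearby pixel values collapse into the same quantization bucket,
    so small adversarial perturbations are rounded away.
    `image` is a uint8 array; the output has the same shape and dtype.
    """
    levels = 2 ** bits - 1
    x = image.astype(np.float64) / 255.0       # scale to [0, 1]
    squeezed = np.round(x * levels) / levels   # quantize to `levels`+1 values
    return np.round(squeezed * 255).astype(np.uint8)

# A defended model would see only the squeezed input, e.g.:
#   logits = model(bit_squeeze(x))
```

With `bits=4`, each channel takes at most 16 distinct values, and no pixel moves by more than half a quantization bucket. JPEG filtering and median smoothing slot into the same wrapper position in front of the model.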
Pages: 8