Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW

Times cited: 2
Authors
Villegas-Ch, William [1 ]
Jaramillo-Alcazar, Angel [1 ]
Lujan-Mora, Sergio [2 ]
Affiliations
[1] Univ Las Amer, Escuela Ingn Cibersegur, Fac Ingn Ciencias Aplicadas, Quito 170125, Ecuador
[2] Univ Alicante, Dept Lenguajes & Sistemas Informat, Alicante 03690, Spain
Keywords
adversarial examples; model robustness; countermeasures; neural networks
DOI
10.3390/bdcc8010008
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This study evaluated the generation of adversarial examples and the resulting robustness of an image classification model. Attacks were performed with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Carlini and Wagner (CW) attack to perturb the original images and analyze their impact on the model's classification accuracy. Image manipulation techniques were also investigated as defensive measures against adversarial attacks. The results highlighted the model's vulnerability to adversarial examples: the Fast Gradient Sign Method effectively altered the original classifications, while the Carlini and Wagner method proved less effective at doing so. Noise reduction, image compression, and Gaussian blurring were presented as promising and effective countermeasures. These findings underscore the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. The article emphasizes the urgency of addressing the threat that adversarial examples pose to machine learning models, highlighting the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. Such efforts are crucial for safeguarding model integrity and trust in an environment of constantly evolving hostile threats. An average 25% decrease in accuracy was observed for the VGG16 model under the FGSM and PGD attacks, and an even larger 35% decrease with the Carlini and Wagner method.
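As a concrete illustration of the attack and defense pipeline described in the abstract, the sketch below shows minimal FGSM and PGD attacks and a Gaussian-blur input defense in PyTorch. It assumes a pretrained torchvision VGG16 and inputs scaled to [0, 1]; the epsilon and step values, the dummy input tensor, and the label index 207 are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Pretrained VGG16 as the victim model (the paper reports results for VGG16;
# its dataset and preprocessing are not reproduced here).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only input gradients are needed for the attacks

def fgsm_attack(image, label, epsilon=0.03):
    """Fast Gradient Sign Method: one step along the sign of the input gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

def pgd_attack(image, label, epsilon=0.03, alpha=0.007, steps=10):
    """Projected Gradient Descent: iterated FGSM-style steps projected back
    onto an L-infinity ball of radius epsilon around the original image."""
    original = image.clone().detach()
    adversarial = original.clone()
    for _ in range(steps):
        adversarial = adversarial.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(adversarial), label)
        loss.backward()
        with torch.no_grad():
            adversarial = adversarial + alpha * adversarial.grad.sign()
            adversarial = original + (adversarial - original).clamp(-epsilon, epsilon)
            adversarial = adversarial.clamp(0.0, 1.0)
    return adversarial.detach()

def gaussian_blur_defense(image, kernel_size=5, sigma=1.0):
    """Input-transformation defense: smoothing dampens high-frequency
    adversarial noise before classification."""
    return transforms.functional.gaussian_blur(image, kernel_size, sigma)

# Illustrative usage with a dummy image and a hypothetical label index (207).
x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed input in [0, 1]
y = torch.tensor([207])
x_fgsm = fgsm_attack(x, y)
x_pgd = pgd_attack(x, y)
print(model(x).argmax(1), model(x_fgsm).argmax(1), model(x_pgd).argmax(1),
      model(gaussian_blur_defense(x_fgsm)).argmax(1))

The same pattern extends to the other countermeasures mentioned above (noise reduction, image compression): each is an input transformation applied before the model's forward pass, so it can be swapped in for gaussian_blur_defense without changing the attack code.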
Pages: 23