Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging

Cited by: 1
Authors
Kanca, Elif [1 ]
Ayas, Selen [2 ]
Kablan, Elif Baykal [1 ]
Ekinci, Murat [2 ]
Affiliations
[1] Karadeniz Tech Univ, Dept Software Engn, Trabzon, Turkiye
[2] Karadeniz Tech Univ, Dept Comp Engn, Trabzon, Turkiye
Keywords
Adversarial attacks; Adversarial defense; Vision transformer; Medical image classification; DIABETIC-RETINOPATHY; VALIDATION
DOI
10.1007/s11517-024-03226-5
Chinese Library Classification (CLC)
TP39 [Applications of computers]
Discipline codes
081203; 0835
Abstract
Deep neural networks (DNNs) have demonstrated exceptional performance in medical image analysis. However, recent studies have uncovered significant vulnerabilities in DNN models, particularly their susceptibility to adversarial attacks that manipulate these models into making inaccurate predictions. Vision Transformers (ViTs), despite their advanced capabilities in medical imaging tasks, have not been thoroughly evaluated for robustness against such attacks in this domain. This study addresses that gap by conducting an extensive analysis of various adversarial attacks on ViTs within medical imaging contexts. We explore adversarial training as a potential defense mechanism and assess the resilience of ViT models against state-of-the-art adversarial attacks and defense strategies using publicly available benchmark medical image datasets. Our findings reveal that ViTs are vulnerable to adversarial attacks even under minimal perturbations, although adversarial training significantly enhances their robustness, achieving over 80% classification accuracy. Additionally, we perform a comparative analysis with state-of-the-art convolutional neural network models, highlighting the unique strengths and weaknesses of ViTs in handling adversarial threats. This research advances the understanding of ViT robustness in medical imaging and provides insights into the practical deployment of ViTs in real-world scenarios.
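To illustrate the kind of attack and defense the abstract refers to, the sketch below crafts a single-step FGSM perturbation and applies one adversarial-training update to a torchvision ViT-B/16 classifier. It is a minimal sketch under assumed settings (PyTorch/torchvision, a two-class head, epsilon = 2/255, a dummy batch standing in for medical images) and is not the paper's exact attack suite, architecture, or training configuration.

```python
# Minimal sketch, assuming PyTorch + torchvision; model, epsilon, and data are
# illustrative placeholders, not the paper's experimental setup.
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16


def fgsm_attack(model, images, labels, epsilon=2 / 255):
    """Craft FGSM adversarial examples: x_adv = x + eps * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range


def adversarial_training_step(model, optimizer, images, labels, epsilon=2 / 255):
    """One adversarial-training update: train on FGSM-perturbed inputs."""
    model.eval()  # disable dropout while crafting the perturbation
    x_adv = fgsm_attack(model, images, labels, epsilon)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = vit_b_16(num_classes=2)        # e.g., a binary medical image classifier
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.rand(4, 3, 224, 224)         # dummy batch in place of medical images
    y = torch.randint(0, 2, (4,))
    print(adversarial_training_step(model, optimizer, x, y))
```

In practice, stronger iterative attacks such as PGD and the publicly available benchmark medical datasets mentioned in the abstract would replace the single FGSM step and the dummy batch used here.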
Pages: 673-690
Page count: 18