Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging

Cited: 1
Authors
Kanca, Elif [1 ]
Ayas, Selen [2 ]
Kablan, Elif Baykal [1 ]
Ekinci, Murat [2 ]
Affiliations
[1] Karadeniz Tech Univ, Dept Software Engn, Trabzon, Turkiye
[2] Karadeniz Tech Univ, Dept Comp Engn, Trabzon, Turkiye
Keywords
Adversarial attacks; Adversarial defense; Vision transformer; Medical image classification; DIABETIC-RETINOPATHY; VALIDATION;
DOI
10.1007/s11517-024-03226-5
Chinese Library Classification (CLC) number
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Deep neural networks (DNNs) have demonstrated exceptional performance in medical image analysis. However, recent studies have uncovered significant vulnerabilities in DNN models, particularly their susceptibility to adversarial attacks that manipulate these models into making inaccurate predictions. Vision Transformers (ViTs), despite their advanced capabilities in medical imaging tasks, have not been thoroughly evaluated for robustness against such attacks in this domain. This study addresses this gap by conducting an extensive analysis of various adversarial attacks on ViTs specifically within medical imaging contexts. We explore adversarial training as a potential defense mechanism and assess the resilience of ViT models against state-of-the-art adversarial attacks and defense strategies using publicly available benchmark medical image datasets. Our findings reveal that ViTs are vulnerable to adversarial attacks even with minimal perturbations, although adversarial training significantly enhances their robustness, achieving over 80% classification accuracy. Additionally, we perform a comparative analysis with state-of-the-art convolutional neural network models, highlighting the unique strengths and weaknesses of ViTs in handling adversarial threats. This research advances the understanding of ViTs' robustness in medical imaging and provides insights into their practical deployment in real-world scenarios.
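The attack-and-defense loop the abstract describes can be sketched with a toy example. The code below is a minimal, hypothetical illustration in plain Python: it applies the Fast Gradient Sign Method (FGSM), a standard gradient-based attack of the kind evaluated in work like this, to a two-parameter logistic model standing in for a ViT classifier, then hardens the model with a simple adversarial-training loop. All weights, inputs, and hyperparameters are invented for illustration; the paper's actual models, attacks, and datasets are not reproduced here.

```python
import math

# Toy logistic "classifier" p(y=1|x) = sigmoid(w.x + b), a stand-in
# for a ViT; every number below is hypothetical.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(g):
    return (g > 0) - (g < 0)

def fgsm(w, b, x, y, eps):
    # FGSM: x_adv = x + eps * sign(grad_x loss).
    # For cross-entropy on this model, grad_x loss = (p - y) * w.
    p = predict(w, b, x)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

w, b = [2.0, -1.0], 0.0
x, y, eps = [0.3, 0.2], 1, 0.5

print(predict(w, b, x) > 0.5)          # True: clean input classified correctly
x_adv = fgsm(w, b, x, y, eps)
print(predict(w, b, x_adv) > 0.5)      # False: a small perturbation flips the prediction

# Adversarial training: regenerate adversarial examples from the
# current weights each step and take a gradient step on them.
lr = 1.0
for _ in range(50):
    x_adv = fgsm(w, b, x, y, eps)
    p = predict(w, b, x_adv)
    w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x_adv)]
    b -= lr * (p - y)

print(predict(w, b, fgsm(w, b, x, y, eps)) > 0.5)  # True: robust to the same attack
```

The inner loop mirrors adversarial training at scale: adversarial examples are recomputed from the current weights every step and the loss on them is minimized, which is the mechanism behind the robustness gains (over 80% accuracy) reported in the abstract.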
Pages: 673-690
Page count: 18