The Impact of Simultaneous Adversarial Attacks on Robustness of Medical Image Analysis

Cited: 0
Authors
Pal, Shantanu [1 ]
Rahman, Saifur [1 ]
Beheshti, Maedeh [2 ]
Habib, Ahsan [1 ]
Jadidi, Zahra [3 ]
Karmakar, Chandan [1 ]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Melbourne, Vic 3125, Australia
[2] Crit Path Inst, Tucson, AZ 85718 USA
[3] Griffith Univ, Sch Informat & Commun Technol, Gold Coast, Qld 4222, Australia
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Machine learning; Deep learning; Biomedical imaging; Medical image analysis; Robustness; Adversarial attacks
DOI
10.1109/ACCESS.2024.3396566
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Deep learning models are widely used in healthcare systems. However, these models are themselves vulnerable to adversarial attacks, and because of their black-box nature such attacks are difficult to detect. Given the sensitivity of medical data, adversarial attacks in healthcare systems pose serious security and privacy threats. In this paper, we provide a comprehensive analysis of adversarial attacks on medical image analysis, using two attack methods, FGSM and PGD, applied either to the entire image or to a partial region of it. Partial attacks are applied at various sizes, either individually or in simultaneous combination. We use three medical image datasets to examine the impact on model accuracy and robustness. Finally, we provide a complete implementation of the attacks and discuss the results. Our results expose the weaknesses and robustness of four deep learning models and show how varying perturbations influence model behaviour with respect to specific image regions and critical features.
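The paper provides its own full implementation; as a rough illustration of the attack mechanics summarised in the abstract, the sketch below shows single-step FGSM and iterative PGD in PyTorch, with an optional binary mask standing in for the "partial image" attack. The function names, step sizes, and mask-based region selection are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon, mask=None):
    # Single-step FGSM: perturb the input in the direction of the sign
    # of the loss gradient, scaled by epsilon.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbation = epsilon * image.grad.sign()
    if mask is not None:
        # Partial attack (assumption): restrict the perturbation to a
        # binary mask over the attacked region.
        perturbation = perturbation * mask
    return (image + perturbation).clamp(0.0, 1.0).detach()

def pgd_attack(model, image, label, epsilon, alpha=0.01, steps=10, mask=None):
    # Iterative PGD: repeated FGSM-style steps, projected back into the
    # L-infinity epsilon-ball around the original image after each step.
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        step = alpha * adv.grad.sign()
        if mask is not None:
            step = step * mask
        adv = adv.detach() + step
        # Project onto the epsilon-ball and the valid pixel range.
        adv = (original + (adv - original).clamp(-epsilon, epsilon)).clamp(0.0, 1.0).detach()
    return adv
```

A simultaneous (combinational) partial attack, as described in the abstract, would correspond to a mask covering several disjoint regions at once; the same functions apply unchanged.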
Pages: 66478-66494
Page count: 17