The Impact of Simultaneous Adversarial Attacks on Robustness of Medical Image Analysis

Cited by: 0
Authors
Pal, Shantanu [1 ]
Rahman, Saifur [1 ]
Beheshti, Maedeh [2 ]
Habib, Ahsan [1 ]
Jadidi, Zahra [3 ]
Karmakar, Chandan [1 ]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Melbourne, Vic 3125, Australia
[2] Crit Path Inst, Tucson, AZ 85718 USA
[3] Griffith Univ, Sch Informat & Commun Technol, Gold Coast, Qld 4222, Australia
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Machine learning; deep learning; biomedical imaging; medical image analysis; robustness; adversarial attacks
DOI
10.1109/ACCESS.2024.3396566
Chinese Library Classification
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Deep learning models are widely used in healthcare systems. However, these models are themselves vulnerable to attacks, and their black-box nature makes such attacks difficult to detect. Moreover, given the sensitivity of the data involved, adversarial attacks in healthcare systems constitute serious security and privacy threats. In this paper, we provide a comprehensive analysis of adversarial attacks on medical image analysis, covering two adversarial methods, FGSM and PGD, applied either to the entire image or to part of it. The partial attacks come in various sizes and are applied either individually or in combination. We use three medical datasets to examine the impact on model accuracy and robustness. Finally, we provide a complete implementation of the attacks and discuss the results. Our results reveal the weaknesses and robustness of four deep learning models and show how varying perturbations influence model behaviour with respect to specific regions and critical features.
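For context on the two attack methods named in the abstract, the following is a minimal PyTorch-style sketch, not the authors' released implementation: a single-step FGSM perturbation and an iterative PGD variant, with an optional rectangular mask to approximate a partial-image attack. The names model, images, labels, and region, as well as the epsilon, alpha, and step values, are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03, region=None):
    # Illustrative sketch (assumed interface): single-step FGSM, perturbing each
    # pixel by epsilon in the direction of the sign of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    perturbation = epsilon * images.grad.sign()
    if region is not None:
        # Partial attack: keep the perturbation only inside a rectangular
        # sub-region given as (y0, y1, x0, x1); zero it elsewhere.
        mask = torch.zeros_like(images)
        y0, y1, x0, x1 = region
        mask[..., y0:y1, x0:x1] = 1.0
        perturbation = perturbation * mask
    return torch.clamp(images + perturbation, 0.0, 1.0).detach()

def pgd_attack(model, images, labels, epsilon=0.03, alpha=0.007, steps=10, region=None):
    # Iterative PGD variant: repeated small FGSM steps, each projected back into
    # the L-infinity epsilon-ball around the original images.
    originals = images.clone().detach()
    adv = originals.clone()
    for _ in range(steps):
        adv = fgsm_attack(model, adv, labels, epsilon=alpha, region=region)
        adv = torch.clamp(originals + torch.clamp(adv - originals, -epsilon, epsilon), 0.0, 1.0)
    return adv

Applying either function to a batch of medical images and comparing classifier accuracy on clean and perturbed inputs gives the kind of whole-image versus partial-image robustness comparison the paper investigates.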
Pages: 66478 - 66494
Number of pages: 17
Related Papers
50 records in total
  • [1] Malicious Adversarial Attacks on Medical Image Analysis
    Winter, Thomas C.
    AMERICAN JOURNAL OF ROENTGENOLOGY, 2020, 215 (05) : W55 - W55
  • [2] Reply to "Malicious Adversarial Attacks on Medical Image Analysis"
    Desjardins, Benoit
    Ritenour, E. Russell
    AMERICAN JOURNAL OF ROENTGENOLOGY, 2020, 215 (05) : W56 - W56
  • [3] A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis
    Apostolidis, Kyriakos D.
    Papakostas, George A.
    ELECTRONICS, 2021, 10 (17)
  • [4] Adversarial Attacks on Medical Image Classification
    Tsai, Min-Jen
    Lin, Ping-Yi
    Lee, Ming-En
    CANCERS, 2023, 15 (17)
  • [5] Challenging the Robustness of Image Registration Similarity Metrics with Adversarial Attacks
    Rexeisen, Robin
    Jiang, Xiaoyi
    BIOMEDICAL IMAGE REGISTRATION, WBIR 2024, 2025, 15249 : 112 - 126
  • [6] Adversarial attacks and adversarial robustness in computational pathology
    Ghaffari Laleh, Narmin
    Truhn, Daniel
    Veldhuizen, Gregory Patrick
    Han, Tianyu
    van Treeck, Marko
    Buelow, Roman D.
    Langer, Rupert
    Dislich, Bastian
    Boor, Peter
    Schulz, Volkmar
    Kather, Jakob Nikolas
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [7] Adversarial attacks and adversarial robustness in computational pathology
    Ghaffari Laleh, Narmin
    Truhn, Daniel
    Veldhuizen, Gregory Patrick
    Han, Tianyu
    van Treeck, Marko
    Buelow, Roman D.
    Langer, Rupert
    Dislich, Bastian
    Boor, Peter
    Schulz, Volkmar
    Kather, Jakob Nikolas
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [8] Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks
    Smagulova, Kamilya
    Bacha, Lina
    Fouda, Mohammed E.
    Kanj, Rouwaida
    Eltawil, Ahmed
    ELECTRONICS, 2024, 13 (03)
  • [9] Towards Evaluating the Robustness of Adversarial Attacks Against Image Scaling Transformation
    ZHENG Jiamin
    ZHANG Yaoyuan
    LI Yuanzhang
    WU Shangbo
    YU Xiao
    CHINESE JOURNAL OF ELECTRONICS, 2023, 32 (01) : 151 - 158
  • [10] Towards Evaluating the Robustness of Adversarial Attacks Against Image Scaling Transformation
    Zheng, Jiamin
    Zhang, Yaoyuan
    Li, Yuanzhang
    Wu, Shangbo
    Yu, Xiao
    CHINESE JOURNAL OF ELECTRONICS, 2023, 32 (01) : 151 - 158