Defense Against Adversarial Attacks Based on Stochastic Descent Sign Activation Networks on Medical Images

Cited by: 5
Authors
Yang, Yanan [1 ]
Shih, Frank Y. [1 ,2 ]
Roshan, Usman [1 ]
Affiliations
[1] New Jersey Inst Technol, Dept Comp Sci, Newark, NJ 07102 USA
[2] Asia Univ, Dept Comp Sci & Informat Engn, Taichung, Taiwan
Keywords
Robust machine learning; adversarial attack; medical AI imaging system; medical image classification
DOI: 10.1142/S0218001422540052
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Machine learning techniques in medical imaging systems are accurate, but minor data perturbations known as adversarial attacks can fool them. These attacks leave such systems vulnerable to fraud and deception, posing a significant challenge in practice. We present gradient-free trained sign activation networks to detect and deter adversarial attacks on medical imaging AI systems. Experimental results on MRI, Chest X-ray, and Histopathology image datasets show that attacking our proposed model requires a higher distortion than attacking existing state-of-the-art models, in some cases roughly twice as much. The average accuracy of our model in classifying adversarial examples is 88.89%, compared with 81.48% for both MLP and LeNet and 38.89% for ResNet18. We conclude that the sign network is an effective defense against adversarial attacks, owing to the high distortion required to attack it and its high accuracy on transferred adversarial examples. Our work is a significant step towards safe and secure medical AI systems.
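To make the notion of "minor perturbations" concrete, the sketch below shows a minimal FGSM-style attack on a toy logistic classifier. This is an illustration of the general attack family only, not the paper's method or datasets; the model, weights, and `fgsm_perturb` helper are all hypothetical, and only NumPy is assumed.

```python
import numpy as np

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Return an adversarial copy of x for a logistic model w.x + b.

    The input is nudged by eps along the sign of the cross-entropy
    loss gradient with respect to x (the FGSM direction).
    """
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy example: a point the model correctly classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)       # → True  (original: class 1)
print(sigmoid(w @ x_adv + b) > 0.5)   # → False (small shift flips the label)
```

The perturbation changes each coordinate by at most `eps`, yet the prediction flips; defenses like the sign networks studied in this paper aim to force a much larger `eps` before such a flip occurs.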
Pages: 17
Related Papers
50 records
  • [1] Defense against adversarial attacks in traffic sign images identification based on 5G
    Wu, Fei
    Xiao, Limin
    Yang, Wenxue
    Zhu, Jinbin
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2020, 2020 (01)
  • [2] Defense Against Adversarial Attacks by Reconstructing Images
    Zhang, Shudong
    Gao, Haichang
    Rao, Qingxun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 6117 - 6129
  • [3] Robust convolutional neural networks against adversarial attacks on medical images
    Shi, Xiaoshuang
    Peng, Yifan
    Chen, Qingyu
    Keenan, Tiarnan
    Thavikulwat, Alisa T.
    Lee, Sungwon
    Tang, Yuxing
    Chew, Emily Y.
    Summers, Ronald M.
    Lu, Zhiyong
    PATTERN RECOGNITION, 2022, 132
  • [4] TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images
    Entezari, Negin
    Papalexakis, Evangelos E.
    2022 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM), 2022
  • [5] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks
    Li, Xiaoting
    Chen, Lingwei
    Zhang, Jinquan
    Larus, James
    Wu, Dinghao
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [6] Defense Mechanism Against Adversarial Attacks Using Density-based Representation of Images
    Huang, Yen-Ting
    Liao, Wen-Hung
    Huang, Chen-Wei
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 3499 - 3504
  • [7] AdvCapsNet: To defense adversarial attacks based on Capsule networks
    Li, Yueqiao
    Su, Hang
    Zhu, Jun
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2021, 75
  • [8] A Data Augmentation-Based Defense Method Against Adversarial Attacks in Neural Networks
    Zeng, Yi
    Qiu, Han
    Memmi, Gerard
    Qiu, Meikang
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT II, 2020, 12453 : 274 - 289
  • [9] Deblurring as a Defense against Adversarial Attacks
    Duckworth, William, III
    Liao, Weixian
    Yu, Wei
    2023 IEEE 12TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING, CLOUDNET, 2023, : 61 - 67