Defense against Adversarial Attacks in Image Recognition Based on Multilayer Filters

Cited by: 0
Authors: Wang, Mingde [1]; Liu, Zhijing [1]
Affiliations: [1] Xidian Univ, Comp Informat Applicat Res Ctr, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Source: APPLIED SCIENCES-BASEL, 2024, Vol. 14, No. 18
Keywords: adversarial attack; deep learning; defense method; machine learning; robustness
DOI: 10.3390/app14188119
Chinese Library Classification: O6 [Chemistry]
Subject Classification Code: 0703
Abstract
The security and privacy of learning-based systems are urgent concerns in making them both safe and efficient. Recent studies have shown that these systems are susceptible to subtle adversarial perturbations applied to their inputs: although such perturbations are difficult for humans to detect, they can easily mislead deep learning classifiers. Noise injection, as a defense mechanism, can offer a provable defense against adversarial attacks by reducing sensitivity to subtle input changes; however, such methods suffer from high computational cost and limited adaptability. We propose a multilayer filter defense model, drawing inspiration from filter-based image denoising techniques. This model inserts a filtering layer between the input layer and the first convolutional layer, and incorporates noise injection during training, substantially enhancing the resilience of image classification systems to adversarial attacks. We also investigate the impact of various filter combinations, filter area sizes, standard deviations, and numbers of filter layers on defense effectiveness. The experimental results indicate that, across the MNIST, CIFAR10, and CIFAR100 datasets, the multilayer filter defense model achieves the highest average accuracy when employing a double-layer Gaussian filter (filter area size of 3x3, standard deviation of 1). We compared our method with two filter-based defense models, and the results demonstrate that our method attains an average accuracy of 71.9%, effectively enhancing the robustness of the image recognition classifier against adversarial attacks. The method not only performs well on small-scale datasets but also remains robust on a larger-scale dataset (miniImageNet) and on modern models (EfficientNet and WideResNet).
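The preprocessing idea the abstract describes, a stack of Gaussian filtering passes applied to the input before it reaches the network, can be sketched in plain NumPy. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names (`gaussian_kernel`, `multilayer_filter`) are ours, and the noise-injection training step is omitted.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (3x3, sigma=1 matches the
    best-performing configuration reported in the abstract)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter_pass(image, size=3, sigma=1.0):
    """One filtering pass: 'same'-size convolution with edge padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def multilayer_filter(image, layers=2, size=3, sigma=1.0):
    """Stack several passes, as in the double-layer Gaussian setup."""
    for _ in range(layers):
        image = gaussian_filter_pass(image, size, sigma)
    return image
```

In a real pipeline this smoothing would sit between the input and the first convolutional layer, attenuating the high-frequency components that adversarial perturbations typically exploit, before the (noise-injection-trained) classifier sees the image.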
Pages: 19