Defense against Adversarial Attacks in Image Recognition Based on Multilayer Filters

Cited: 0
Authors
Wang, Mingde [1 ]
Liu, Zhijing [1 ]
Affiliations
[1] Xidian Univ, Comp Informat Applicat Res Ctr, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024 / Vol. 14 / Issue 18
Keywords
adversarial attack; deep learning; defense method; machine learning; robustness
DOI
10.3390/app14188119
CLC Number
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Security and privacy are pressing issues in building secure and efficient learning-based systems. Recent studies have shown that such systems are susceptible to subtle adversarial perturbations applied to their inputs: although these perturbations are difficult for humans to detect, they easily mislead deep learning classifiers. Noise injection, as a defense mechanism, can offer a provable defense against adversarial attacks by reducing a model's sensitivity to subtle input changes; however, noise-injection methods suffer from high computational complexity and limited adaptability. We propose a multilayer filter defense model inspired by filter-based image denoising techniques. The model inserts a filtering layer between the input layer and the first convolutional layer and incorporates noise injection during training, substantially enhancing the resilience of image classification systems to adversarial attacks. We also investigated how filter combinations, filter kernel sizes, standard deviations, and the number of filter layers affect defense effectiveness. The experimental results indicate that, across the MNIST, CIFAR10, and CIFAR100 datasets, the multilayer filter defense model achieves its highest average accuracy with a double-layer Gaussian filter (3x3 kernel, standard deviation of 1). Compared against two filter-based defense models, our method attained an average accuracy of 71.9%, effectively enhancing the robustness of image recognition classifiers against adversarial attacks. The method not only performs well on small-scale datasets but also remains robust on a larger-scale dataset (miniImageNet) and on modern models (EfficientNet and WideResNet).
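To make the described architecture concrete, below is a minimal PyTorch sketch of the idea in the abstract: a fixed double-layer Gaussian filter (3x3 kernel, standard deviation 1) placed between the input and the backbone's first convolutional layer, with Gaussian noise injected during training. The names gaussian_kernel2d, GaussianFilterLayer, and FilteredClassifier, the noise standard deviation, and the choice to add noise before filtering are illustrative assumptions, not the paper's released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel2d(size=3, sigma=1.0):
    # Normalized 2D Gaussian kernel of shape (size, size).
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g = torch.exp(-(coords ** 2) / (2.0 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

class GaussianFilterLayer(nn.Module):
    # Fixed (non-trainable) per-channel Gaussian smoothing via depthwise convolution.
    def __init__(self, channels, size=3, sigma=1.0):
        super().__init__()
        k = gaussian_kernel2d(size, sigma).view(1, 1, size, size)
        self.register_buffer("kernel", k.repeat(channels, 1, 1, 1))
        self.groups = channels
        self.padding = size // 2

    def forward(self, x):
        # Each channel is smoothed independently; spatial size is preserved.
        return F.conv2d(x, self.kernel, padding=self.padding, groups=self.groups)

class FilteredClassifier(nn.Module):
    # Backbone preceded by a double-layer Gaussian filter, per the abstract.
    # Noise injection is applied only in training mode; noise_std and its
    # placement before the filters are assumptions for illustration.
    def __init__(self, backbone, channels=3, noise_std=0.1):
        super().__init__()
        self.filters = nn.Sequential(
            GaussianFilterLayer(channels, size=3, sigma=1.0),
            GaussianFilterLayer(channels, size=3, sigma=1.0),
        )
        self.backbone = backbone
        self.noise_std = noise_std

    def forward(self, x):
        if self.training:
            x = x + self.noise_std * torch.randn_like(x)
        return self.backbone(self.filters(x))

Because the filter kernels are registered as fixed buffers, this sketch adds no trainable parameters and negligible inference cost; any backbone, e.g., a WideResNet or EfficientNet as mentioned in the abstract, would be wrapped as FilteredClassifier(backbone).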
Pages: 19
Related Papers (50 total)
  • [31] Deep image prior based defense against adversarial examples
    Dai, Tao
    Feng, Yan
    Chen, Bin
    Lu, Jian
    Xia, Shu-Tao
    PATTERN RECOGNITION, 2022, 122
  • [32] Defense Against Adversarial Attacks on Audio DeepFake Detection
    Kawa, Piotr
    Plata, Marcin
    Syga, Piotr
    INTERSPEECH 2023, 2023, : 5276 - 5280
  • [33] Defensive Bit Planes: Defense Against Adversarial Attacks
    Tripathi, Achyut Mani
    Behera, Swarup Ranjan
    Paul, Konark
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [34] Deep Learning Defense Method Against Adversarial Attacks
    Wang, Ling
    Zhang, Cheng
    Liu, Jie
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 3667 - 3671
  • [35] Cyclic Defense GAN Against Speech Adversarial Attacks
    Esmaeilpour, Mohammad
    Cardinal, Patrick
    Koerich, Alessandro Lameiras
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1769 - 1773
  • [36] Defense-VAE: A Fast and Accurate Defense Against Adversarial Attacks
    Li, Xiang
    Ji, Shihao
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 191 - 207
  • [37] Detection defense against adversarial attacks with saliency map
    Ye, Dengpan
    Chen, Chuanxi
    Liu, Changrui
    Wang, Hao
    Jiang, Shunzhi
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 10193 - 10210
  • [38] Symmetry Defense Against CNN Adversarial Perturbation Attacks
    Lindqvist, Blerta
    INFORMATION SECURITY, ISC 2023, 2023, 14411 : 142 - 160
  • [39] Universal Inverse Perturbation Defense Against Adversarial Attacks
    Chen J.-Y.
    Wu C.-A.
    Zheng H.-B.
    Wang W.
    Wen H.
Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (10): 2172 - 2187
  • [40] DEFENSE AGAINST ADVERSARIAL ATTACKS ON SPOOFING COUNTERMEASURES OF ASV
    Wu, Haibin
    Liu, Songxiang
    Meng, Helen
    Lee, Hung-yi
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6564 - 6568