Defense against Adversarial Attacks in Image Recognition Based on Multilayer Filters

Cited by: 0
Authors
Wang, Mingde [1 ]
Liu, Zhijing [1 ]
Affiliation
[1] Xidian Univ, Comp Informat Applicat Res Ctr, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, No. 18
Keywords
adversarial attack; deep learning; defense method; machine learning; robustness
DOI
10.3390/app14188119
CLC Classification Number
O6 [Chemistry];
Subject Classification Number
0703;
Abstract
Security and privacy are pressing concerns in building secure and efficient learning-based systems. Recent studies have shown that these systems are susceptible to subtle adversarial perturbations applied to their inputs. Although such perturbations are difficult for humans to detect, they can easily mislead deep learning classifiers. Noise injection, as a defense mechanism, can offer a provable defense against adversarial attacks by reducing sensitivity to subtle input changes. However, such methods face issues of computational complexity and limited adaptability. We propose a multilayer filter defense model inspired by filter-based image denoising techniques. The model inserts a filtering layer between the input layer and the first convolutional layer and incorporates noise injection during training, substantially enhancing the resilience of image classification systems to adversarial attacks. We also investigate the impact of different filter combinations, filter window sizes, standard deviations, and numbers of filter layers on defense effectiveness. The experimental results indicate that, across the MNIST, CIFAR10, and CIFAR100 datasets, the multilayer filter defense model achieves the highest average accuracy when employing a double-layer Gaussian filter (3×3 filter window, standard deviation of 1). We compared our method with two filter-based defense models; our method attained an average accuracy of 71.9%, effectively enhancing the robustness of the image recognition classifier against adversarial attacks. The method performs well not only on small-scale datasets but also on a large-scale dataset (miniImageNet) and modern models (EfficientNet and WideResNet).
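The filtering layer described in the abstract — a double-layer Gaussian filter with a 3×3 window and standard deviation 1, applied between the input and the first convolutional layer — can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: `gaussian_kernel` and `filter_layer` are hypothetical names, and the edge-replication padding is an assumption.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def filter_layer(image, size=3, sigma=1.0, layers=2):
    """Apply `layers` successive Gaussian filters to a 2-D image.

    Sketch of the preprocessing step the paper inserts between the
    input layer and the first convolutional layer; padding mode
    ("edge" replication) is an assumption, not from the paper.
    """
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    out = image.astype(float)
    for _ in range(layers):
        padded = np.pad(out, pad, mode="edge")
        new = np.empty_like(out)
        h, w = out.shape
        for i in range(h):
            for j in range(w):
                # Correlate the kernel with the local window.
                new[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
        out = new
    return out
```

Because the kernel is normalized, the layer leaves constant regions unchanged while attenuating the high-frequency components that adversarial perturbations typically occupy.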
Pages: 19
Related Papers
50 entries in total
  • [1] Defense against Adversarial Attacks on Image Recognition Systems Using an Autoencoder
    Platonov, V. V.
    Grigorjeva, N. M.
    AUTOMATIC CONTROL AND COMPUTER SCIENCES, 2023, 57 (08) : 989 - 995
  • [3] Adaptive Image Reconstruction for Defense Against Adversarial Attacks
    Yang, Yanan
    Shih, Frank Y.
    Chang, I-Cheng
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (12)
  • [4] LanCeX: A Versatile and Lightweight Defense Method against Condensed Adversarial Attacks in Image and Audio Recognition
    Xu, Zirui
    Yu, Fuxun
    Liu, Chenchen
    Chen, Xiang
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2023, 22 (01)
  • [5] Image Super-Resolution as a Defense Against Adversarial Attacks
    Mustafa, Aamir
    Khan, Salman H.
    Hayat, Munawar
    Shen, Jianbing
    Shao, Ling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 1711 - 1724
  • [6] Encoding Generative Adversarial Networks for Defense Against Image Classification Attacks
    Perez-Bravo, Jose M.
    Rodriguez-Rodriguez, Jose A.
    Garcia-Gonzalez, Jorge
    Molina-Cabello, Miguel A.
    Thurnhofer-Hemsi, Karl
    Lopez-Rubio, Ezequiel
    BIO-INSPIRED SYSTEMS AND APPLICATIONS: FROM ROBOTICS TO AMBIENT INTELLIGENCE, PT II, 2022, 13259 : 163 - 172
  • [7] Deep Image Restoration Model: A Defense Method Against Adversarial Attacks
    Ali, Kazim
    Quershi, Adnan N.
    Bin Arifin, Ahmad Alauddin
    Bhatti, Muhammad Shahid
    Sohail, Abid
    Hassan, Rohail
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 71 (02): : 2209 - 2224
  • [8] Defense against adversarial attacks by low-level image transformations
    Yin, Zhaoxia
    Wang, Hua
    Wang, Jie
    Tang, Jin
    Wang, Wenzhong
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2020, 35 (10) : 1453 - 1466
  • [9] Deblurring as a Defense against Adversarial Attacks
    Duckworth, William, III
    Liao, Weixian
    Yu, Wei
    2023 IEEE 12TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING, CLOUDNET, 2023, : 61 - 67
  • [10] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350