Defense against Adversarial Attacks in Image Recognition Based on Multilayer Filters

Times Cited: 0
Authors
Wang, Mingde [1 ]
Liu, Zhijing [1 ]
Affiliations
[1] Xidian Univ, Comp Informat Applicat Res Ctr, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Iss. 18
Keywords
adversarial attack; deep learning; defense method; machine learning; robustness
DOI
10.3390/app14188119
Chinese Library Classification
O6 [Chemistry]
Discipline Classification Code
0703
Abstract
Security and privacy are urgent concerns for learning-based systems. Recent studies have shown that these systems are susceptible to subtle adversarial perturbations applied to their inputs: although the perturbations are difficult for humans to detect, they can easily mislead deep learning classifiers. Noise injection, as a defense mechanism, can offer a provable defense against adversarial attacks by reducing sensitivity to subtle input changes, but such methods suffer from high computational complexity and limited adaptability. We propose a multilayer filter defense model, drawing inspiration from filter-based image denoising. The model inserts a filtering layer between the input layer and the first convolutional layer and incorporates noise injection during training, substantially enhancing the resilience of image classification systems to adversarial attacks. We also investigated how filter combinations, filter area sizes, standard deviations, and the number of filter layers affect defense effectiveness. The experimental results indicate that, across the MNIST, CIFAR10, and CIFAR100 datasets, the multilayer filter defense model achieves the highest average accuracy when employing a double-layer Gaussian filter (filter area size of 3x3, standard deviation of 1). Compared with two other filter-based defense models, our method attained an average accuracy of 71.9%, effectively enhancing the robustness of image recognition classifiers against adversarial attacks. The method performs well not only on small-scale datasets but also on a large-scale dataset (miniImageNet) and modern models (EfficientNet and WideResNet).
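The core preprocessing step the abstract describes, a double-layer Gaussian filter (3x3 kernel, standard deviation 1) applied before the classifier, plus optional noise injection during training, can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function names and the edge-padding choice are assumptions for the sketch.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Normalized 2-D Gaussian kernel (3x3, sigma=1 per the reported best setting)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def filter_layer(img, kernel):
    """'Same'-size 2-D convolution: one filtering layer inserted between
    the input and the first convolutional layer (edge padding assumed)."""
    size = kernel.shape[0]
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out

def defend(img, n_layers=2, size=3, sigma=1.0, noise_std=0.0, rng=None):
    """Apply noise injection (training-time option, noise_std > 0) followed
    by a stack of Gaussian filtering layers (double layer by default)."""
    k = gaussian_kernel(size, sigma)
    x = img.astype(float)
    if noise_std > 0:
        rng = rng or np.random.default_rng(0)
        x = x + rng.normal(0.0, noise_std, x.shape)
    for _ in range(n_layers):
        x = filter_layer(x, k)
    return x
```

In a full pipeline, `defend` would run on each input image (or feature map) before it reaches the network's first convolution, smoothing away the small adversarial perturbations the classifier is sensitive to.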
Pages: 19