CNN adversarial attack mitigation using perturbed samples training

Cited by: 9
Authors
Hashemi, Atiye Sadat [1 ]
Mozaffari, Saeed [1 ]
Affiliations
[1] Semnan Univ, Fac Elect & Comp Engn, Semnan, Iran
Keywords
Adversarial example; Convolutional neural network; Denoising autoencoder; Evasion attacks; Noisy training
DOI
10.1007/s11042-020-10379-6
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Susceptibility to adversarial examples is one of the major concerns in convolutional neural network (CNN) applications. Training the model on adversarial examples, known as adversarial training, is a common countermeasure against such attacks. In practice, however, defenders do not know how the attacker generates adversarial examples, so it is pivotal to use more general alternatives that intrinsically improve the robustness of models. To this end, we train CNNs on perturbed samples, manipulated by various transformations and contaminated by different noises, to foster robustness against adversarial attacks. The idea derives from the fact that both adversarial and noisy samples degrade classifier accuracy. We propose the combination of a convolutional denoising autoencoder with a classifier (CDAEC) as a defensive structure. The proposed method adds no extra computational cost. Experimental results on the MNIST database demonstrate that a CDAEC trained on perturbed samples retained more than 71.29% accuracy under adversarial attacks.
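
The abstract describes two ingredients: training on samples perturbed by transformations and noise, and a convolutional denoising autoencoder chained into a classifier (CDAEC). Below is a minimal PyTorch sketch of such a structure for MNIST-sized inputs, assuming additive Gaussian noise as the perturbation; the layer sizes, the noise level noise_std, and the equal weighting of the reconstruction and classification losses are illustrative assumptions, not the authors' reported configuration.

# Minimal CDAEC sketch (illustrative, not the authors' exact architecture):
# a convolutional denoising autoencoder whose reconstruction feeds a small
# CNN classifier, trained jointly on noise-perturbed MNIST samples.
import torch
import torch.nn as nn

class CDAEC(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Encoder: 1x28x28 input -> 32x7x7 code
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct a denoised 1x28x28 image from the code
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
        # Classifier operates on the denoised reconstruction, not the raw input
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 14 * 14, num_classes),
        )

    def forward(self, x):
        recon = self.decoder(self.encoder(x))
        return recon, self.classifier(recon)

def training_loss(model, x_clean, y, noise_std=0.3):
    # Perturb the input (here additive Gaussian noise; the paper also uses
    # other noises and transformations), then denoise and classify it.
    x_noisy = (x_clean + noise_std * torch.randn_like(x_clean)).clamp(0.0, 1.0)
    recon, logits = model(x_noisy)
    # Reconstruction targets the clean image; classification targets the label.
    return nn.functional.mse_loss(recon, x_clean) + nn.functional.cross_entropy(logits, y)

Classifying the decoder's reconstruction is what turns the autoencoder into a defense: perturbations, adversarial or otherwise, are partly removed before the classifier sees the input, and because the two parts are trained jointly, inference is a single forward pass, consistent with the abstract's claim that the method adds no extra computational cost.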
Pages: 22077-22095
Page count: 19