CNN adversarial attack mitigation using perturbed samples training

Times Cited: 9
Authors
Hashemi, Atiye Sadat [1 ]
Mozaffari, Saeed [1 ]
Affiliations
[1] Semnan Univ, Fac Elect & Comp Engn, Semnan, Iran
Keywords
Adversarial example; Convolution neural network; Denoising autoencoder; Evasion attacks; Noisy training;
DOI
10.1007/s11042-020-10379-6
CLC Number
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Susceptibility to adversarial examples is one of the major concerns in convolutional neural network (CNN) applications. Training the model with adversarial examples, known as adversarial training, is a common countermeasure against such attacks. In practice, however, defenders do not know how the attacker generates adversarial examples. It is therefore pivotal to use more general alternatives that intrinsically improve the robustness of models. For this purpose, we train CNNs with perturbed samples, manipulated by various transformations and contaminated by different noises, to foster the robustness of networks against adversarial attacks. This idea derives from the fact that both adversarial and noisy samples undermine classifier accuracy. We propose the combination of a convolutional denoising autoencoder with a classifier (CDAEC) as a defensive structure. The proposed method does not increase the computational cost. Experimental results on the MNIST database demonstrate that the accuracy of CDAEC trained with perturbed samples under adversarial attacks exceeded 71.29%.
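The abstract describes chaining a convolutional denoising autoencoder with a CNN classifier and training on noise-contaminated samples. The sketch below illustrates that general idea in PyTorch; the layer sizes, the Gaussian noise level, and the joint reconstruction-plus-classification loss are illustrative assumptions, not the authors' exact CDAEC architecture or training procedure.

```python
# Hypothetical sketch: a convolutional denoising autoencoder (CDAE) whose
# reconstruction feeds a CNN classifier, trained on perturbed MNIST samples.
import torch
import torch.nn as nn

class CDAEC(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Denoising autoencoder: maps a perturbed image back toward a clean one.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
        # Classifier operating on the denoised reconstruction.
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        denoised = self.decoder(self.encoder(x))
        return self.classifier(denoised), denoised

def perturb(x: torch.Tensor) -> torch.Tensor:
    # One example of "noisy training": additive Gaussian noise (level 0.3 is an
    # arbitrary choice); the paper also mentions transformed samples.
    return (x + 0.3 * torch.randn_like(x)).clamp(0.0, 1.0)

def training_step(model, x_clean, y,
                  ce=nn.CrossEntropyLoss(), mse=nn.MSELoss()):
    # Joint loss: classify the perturbed input correctly while reconstructing
    # the clean image, so the autoencoder learns to strip the perturbation.
    logits, denoised = model(perturb(x_clean))
    return ce(logits, y) + mse(denoised, x_clean)
```

At test time, the same forward pass is used unchanged, so the denoising stage adds no extra training loop of its own; adversarial or noisy inputs are implicitly cleaned before classification.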
Pages: 22077-22095
Page count: 19
Related Papers
50 records in total
  • [21] Cross-domain replay spoofing attack detection using domain adversarial training
    Wang, Hongji
    Dinkel, Heinrich
    Wang, Shuai
    Qian, Yanmin
    Yu, Kai
    INTERSPEECH 2019, 2019, : 2938 - 2942
  • [22] Semi-supervised learning using adversarial training with good and bad samples
    Li, Wenyuan
    Wang, Zichen
    Yue, Yuguang
    Li, Jiayun
    Speier, William
    Zhou, Mingyuan
    Arnold, Corey
    MACHINE VISION AND APPLICATIONS, 2020, 31 (06)
  • [24] Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition
    Lal, Sheeba
    Rehman, Saeed Ur
    Shah, Jamal Hussain
    Meraj, Talha
    Rauf, Hafiz Tayyab
    Damasevicius, Robertas
    Mohammed, Mazin Abed
    Abdulkareem, Karrar Hameed
    SENSORS, 2021, 21 (11)
  • [25] FedPGT: Prototype-based Federated Global Adversarial Training against Adversarial Attack
    Xu, ZiRong
    Lai, WeiMin
    Yan, Qiao
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 864 - 869
  • [26] Design of robust hyperspectral image classifier based on adversarial training against adversarial attack
    Park I.
    Kim S.
    Journal of Institute of Control, Robotics and Systems, 2021, 27 (06) : 389 - 400
  • [27] Training Meta-Surrogate Model for Transferable Adversarial Attack
    Qin, Yunxiao
    Xiong, Yuanhao
    Yi, Jinfeng
    Hsieh, Cho-Jui
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 8, 2023, : 9516 - 9524
  • [28] Boosting Adversarial Training with Hardness-Guided Attack Strategy
    He, Shiyuan
    Wei, Jiwei
    Zhang, Chaoning
    Xu, Xing
    Song, Jingkuan
    Yang, Yang
    Shen, Heng Tao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 7748 - 7760
  • [29] Adversarial Attack and Training for Graph Convolutional Networks using Focal Loss-Projected Momentum
    Aburidi, Mohammed
    Marcia, Roummel F.
    2024 IEEE 3RD INTERNATIONAL CONFERENCE ON COMPUTING AND MACHINE INTELLIGENCE, ICMI 2024, 2024,
  • [30] AGS: Affordable and Generalizable Substitute Training for Transferable Adversarial Attack
    Wang, Ruikui
    Guo, Yuanfang
    Wang, Yunhong
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 5553 - 5562