HYBRID DEFENSE FOR DEEP NEURAL NETWORKS: AN INTEGRATION OF DETECTING AND CLEANING ADVERSARIAL PERTURBATIONS

Cited by: 3
Authors
Fan, Weiqi [1 ]
Sun, Guangling [1 ]
Su, Yuying [1 ]
Liu, Zhi [1 ]
Lu, Xiaofeng [1 ]
Affiliation
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial perturbations; Hybrid defense; Deep neural network; Computer vision;
DOI
10.1109/ICMEW.2019.00-85
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Deep neural networks (DNNs) have achieved significant success in computer vision. However, recent investigations have shown that DNN models are highly vulnerable to adversarial examples. Defending against adversarial examples is therefore essential for improving the robustness of DNN models. In this paper, we present a hybrid defense framework that integrates detecting and cleaning adversarial perturbations to protect DNNs. Specifically, the detecting part consists of a statistical detector and a Gaussian noise injection detector, each adapted to different perturbation characteristics, which inspect inputs for adversarial examples; the cleaning part is a deep residual generative network (ResGN) that removes or mitigates the adversarial perturbations. The parameters of ResGN are optimized by minimizing a joint loss comprising a pixel loss, a texture loss and a task loss. In the experiments, we evaluate our approach on ImageNet, and the comprehensive results validate its robustness against current representative attacks.
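The abstract states that ResGN is trained by minimizing a joint loss composed of a pixel loss, a texture loss and a task loss. Below is a minimal PyTorch-style sketch of how such a weighted combination might be assembled; the loss weights, the L1 pixel term, the Gram-matrix texture term and the cross-entropy task term are illustrative assumptions and are not taken from this record.

# Minimal sketch of a pixel + texture + task joint loss of the kind
# described in the abstract. The specific terms and weights are assumed
# for illustration, not the paper's exact definitions.
import torch
import torch.nn.functional as F


def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix, a common choice for texture losses."""
    b, c, h, w = features.shape
    f = features.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


def joint_loss(cleaned, clean_target, feat_extractor, classifier, labels,
               w_pixel=1.0, w_texture=0.1, w_task=0.01):
    # Pixel loss: keep the cleaned image close to the clean reference.
    pixel = F.l1_loss(cleaned, clean_target)

    # Texture loss: match second-order feature statistics (assumed form).
    texture = F.mse_loss(gram_matrix(feat_extractor(cleaned)),
                         gram_matrix(feat_extractor(clean_target)))

    # Task loss: the downstream classifier should still predict correctly.
    task = F.cross_entropy(classifier(cleaned), labels)

    return w_pixel * pixel + w_texture * texture + w_task * task

In use, feat_extractor would typically be a fixed pretrained feature network and classifier the DNN being protected; both, like the weights, are placeholders here.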
Pages: 210 - 215
Number of pages: 6
Related papers
50 in total
  • [31] Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction
    Liang, Bin
    Li, Hongcheng
    Su, Miaoqiang
    Li, Xirong
    Shi, Wenchang
    Wang, Xiaofeng
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (01) : 72 - 85
  • [32] Detection of backdoor attacks using targeted universal adversarial perturbations for deep neural networks
    Qu, Yubin
    Huang, Song
    Chen, Xiang
    Wang, Xingya
    Yao, Yongming
    JOURNAL OF SYSTEMS AND SOFTWARE, 2024, 207
  • [33] Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks
    Deka, Shankar A.
    Stipanovic, Dusan M.
    Tomlin, Claire J.
    IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, 2022, 30 (06) : 2615 - 2629
  • [34] Fortifying Deep Neural Networks for Industrial Applications: Feature Map Fusion for Adversarial Defense
    Ali, Mohsin
    Raza, Haider
    Gan, John Q.
    2024 IEEE 19TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS, ICIEA 2024, 2024,
  • [35] ADVERSARIAL DEFENSE FOR DEEP SPEAKER RECOGNITION USING HYBRID ADVERSARIAL TRAINING
    Pal, Monisankha
    Jati, Arindam
    Peri, Raghuveer
    Hsu, Chin-Cheng
    AbdAlmageed, Wael
    Narayanan, Shrikanth
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6164 - 6168
  • [36] Adversarial Attacks on Graph Neural Networks: Perturbations and their Patterns
    Zuegner, Daniel
    Borchert, Oliver
    Akbarnejad, Amir
    Guennemann, Stephan
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2020, 14 (05)
  • [37] Improving adversarial attacks on deep neural networks via constricted gradient-based perturbations
    Xiao, Yatie
    Pun, Chi-Man
    INFORMATION SCIENCES, 2021, 571 : 104 - 132
  • [38] Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks
    Chen, Zitao
    Dash, Pritam
    Pattabiraman, Karthik
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 689 - 703
  • [39] Detecting and Localizing Adversarial Nodes Using Neural Networks
    Li, Gangqiang
    Wu, Sissi Xiaoxiao
    Zhang, Shengli
    Wai, Hoi-To
    Scaglione, Anna
    2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC), 2018, : 86 - 90
  • [40] Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations
    Tsai, Yu-Lin
    Hsu, Chia-Yi
    Yu, Chia-Mu
    Chen, Pin-Yu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34