MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

Cited by: 24
Authors:
Sengupta, Sailik [1 ]
Chakraborti, Tathagata [2 ]
Kambhampati, Subbarao [1 ]
Affiliations:
[1] Arizona State Univ, Tempe, AZ 85281 USA
[2] IBM Res, Cambridge, MA USA
DOI:
10.1007/978-3-030-32430-8_28
CLC Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. Designing general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) to design a meta-defense for 'boosting' the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks. To classify an input image at test time, a constituent network is randomly selected according to a mixed policy. To obtain this policy, we formulate the interaction between a Defender (who hosts the classification networks) and their (legitimate and malicious) users as a Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for datasets such as MNIST, Fashion-MNIST, and ImageNet while maintaining high classification accuracy on legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide more resilience against adversarial attacks than these defense mechanisms can afford alone. Lastly, to quantify the increase in robustness of an ensemble-based classification system under MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability.
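The test-time randomization described in the abstract can be illustrated with a minimal sketch. The classifier names and the example probabilities below are hypothetical placeholders: in MTDeep the mixed policy is the Defender's equilibrium strategy computed by solving the Bayesian Stackelberg Game, not hand-chosen weights.

```python
import random

# Stand-ins for the constituent DNNs of the ensemble (hypothetical names;
# a real system would wrap trained networks' forward passes here).
def classify_with_cnn(image):
    return "cat"

def classify_with_mlp(image):
    return "cat"

def classify_with_hrnn(image):
    return "cat"

# Mixed policy: probability of deploying each network for a given query.
# Illustrative values only; MTDeep derives these from the BSG solution.
POLICY = [
    (classify_with_cnn, 0.5),
    (classify_with_mlp, 0.3),
    (classify_with_hrnn, 0.2),
]

def mtd_classify(image, rng=random):
    """Sample one constituent network per query according to the mixed policy."""
    networks, weights = zip(*POLICY)
    chosen = rng.choices(networks, weights=weights, k=1)[0]
    return chosen(image)
```

Because the network answering each query is drawn afresh, an attacker crafting a perturbation against any single constituent network cannot be sure it will transfer to the network that actually serves the request, which is the source of the robustness gain the paper quantifies via differential immunity.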
Pages: 479-491 (13 pages)
Related Papers (50 items)
  • [22] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161
  • [23] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [24] Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
    Papernot, Nicolas
    McDaniel, Patrick
    Wu, Xi
    Jha, Somesh
    Swami, Ananthram
    2016 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2016, : 582 - 597
  • [25] Instance-based defense against adversarial attacks in Deep Reinforcement Learning
    Garcia, Javier
    Sagredo, Ismael
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 107
  • [26] DDSA: A Defense Against Adversarial Attacks Using Deep Denoising Sparse Autoencoder
    Bakhti, Yassine
    Fezza, Sid Ahmed
    Hamidouche, Wassim
    Deforges, Olivier
    IEEE ACCESS, 2019, 7 : 160397 - 160407
  • [27] Defense Strategies Against Adversarial Jamming Attacks via Deep Reinforcement Learning
    Wang, Feng
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2020 54TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2020, : 336 - 341
  • [29] Defense against adversarial attacks: robust and efficient compressed optimized neural networks
    Kraidia, Insaf
    Ghenai, Afifa
    Belhaouari, Samir Brahim
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [30] Defense against Adversarial Attacks with an Induced Class
    Xu, Zhi
    Wang, Jun
    Pu, Jian
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,