Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

Cited by: 3
Authors
Aquino, Bernardo [1 ]
Rahnama, Arash [2 ]
Seiler, Peter [3 ]
Lin, Lizhen [4 ]
Gupta, Vijay [1 ]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[2] Amazon Inc, New York, NY 10001 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Univ Notre Dame, Dept Appl Computat Math & Stat, Notre Dame, IN 46556 USA
Source
IEEE Control Systems Letters
Keywords
Biological neural networks; Robustness; Training; Perturbation methods; Standards; Neurons; Optimization; Adversarial Attacks; Deep Neural Networks; Robust Design; Passivity Theory; Spectral Regularization
DOI
10.1109/LCSYS.2022.3150719
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Adversarial examples can easily degrade classification performance in neural networks. Empirical methods for promoting robustness to such examples have been proposed, but they often lack analytical insights and formal guarantees. Recently, robustness certificates based on system-theoretic notions have appeared in the literature. This letter proposes an incremental dissipativity-based robustness certificate for neural networks, expressed as a linear matrix inequality for each layer. We also propose a sufficient spectral-norm bound for this certificate that scales to neural networks with many layers. We demonstrate improved performance against adversarial attacks for a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
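The abstract's per-layer spectral-norm condition can be illustrated with a minimal sketch. This is not the authors' LMI-based certificate; the function names, the choice of bound, and the global-rescaling projection are illustrative assumptions that only show how a per-layer spectral-norm constraint might be checked and enforced on weight matrices:

```python
import numpy as np

def spectral_norm(W: np.ndarray) -> float:
    """Largest singular value of a layer's weight matrix."""
    return float(np.linalg.norm(W, 2))

def project_to_bound(W: np.ndarray, bound: float) -> np.ndarray:
    """Rescale W so its spectral norm does not exceed `bound`.

    A simple global rescaling, used here only to illustrate
    enforcing a per-layer spectral-norm constraint; the paper's
    certificate is an LMI per layer, not this projection.
    """
    s = spectral_norm(W)
    return W if s <= bound else W * (bound / s)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))      # one hypothetical layer
W_proj = project_to_bound(W, bound=1.0)
print(spectral_norm(W_proj))
```

In practice such a check or projection would be applied to every layer after each training step, which is what makes a per-layer spectral bound scalable to deep networks.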
Pages: 2341-2346
Page count: 6
Related Papers
50 records in total
  • [21] On the robustness of skeleton detection against adversarial attacks
    Bai, Xiuxiu
    Yang, Ming
    Liu, Zhe
    NEURAL NETWORKS, 2020, 132 : 416 - 427
  • [22] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [23] ROBUSTNESS OF SAAK TRANSFORM AGAINST ADVERSARIAL ATTACKS
    Ramanathan, Thiyagarajan
    Manimaran, Abinaya
    You, Suya
    Kuo, C-C Jay
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2531 - 2535
  • [24] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161
  • [25] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [27] Robust Heterogeneous Graph Neural Networks against Adversarial Attacks
    Zhang, Mengmei
    Wang, Xiao
    Zhu, Meiqi
    Shi, Chuan
    Zhang, Zhiqiang
    Zhou, Jun
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 4363 - 4370
  • [28] RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks
    Marchisio, Alberto
    De Marco, Antonio
    Colucci, Alessio
    Martina, Maurizio
    Shafique, Muhammad
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [29] Using Options to Improve Robustness of Imitation Learning Against Adversarial Attacks
    Dasgupta, Prithviraj
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS III, 2021, 11746
  • [30] Securing Networks Against Adversarial Domain Name System Tunneling Attacks Using Hybrid Neural Networks
    Ness, Stephanie
    IEEE ACCESS, 2025, 13 : 46697 - 46709