An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

Cited by: 14
Authors
Zhao, Pu [1 ]
Liu, Sijia [2 ]
Wang, Yanzhi [1 ]
Lin, Xue [1 ]
Affiliations
[1] Northeastern Univ, Dept ECE, Boston, MA 02115 USA
[2] IBM Corp, Res AI, Armonk, NY 10504 USA
Funding
U.S. National Science Foundation;
Keywords
Deep Neural Networks; Adversarial Attacks; ADMM (Alternating Direction Method of Multipliers);
DOI
10.1145/3240508.3240639
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original legal inputs, can mislead a DNN into classifying them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with minimal added distortion. In the literature, the added distortions are usually measured by the L-0, L-1, L-2, and L-infinity norms, giving rise to L-0, L-1, L-2, and L-infinity attacks, respectively. However, the literature lacks a versatile framework covering all types of adversarial attacks. This work, for the first time, unifies the methods of generating adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, so that L-0, L-1, L-2, and L-infinity attacks can all be implemented effectively within this general framework with only minor modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are the strongest to date, achieving both a 100% attack success rate and minimal distortion.
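The core idea the abstract describes, splitting the distortion-norm term from the attack-loss term via ADMM, can be sketched on a toy problem. The following is an illustrative minimal sketch only, not the paper's exact formulation: it attacks an assumed toy linear classifier with an L-1 distortion penalty, where the z-update is the soft-thresholding proximal operator of the L-1 norm and the delta-update handles the (hinge-style) attack loss; all weights, constants, and the hinge loss itself are assumptions made for the example.

```python
import numpy as np

# Toy ADMM splitting for an L-1-penalized adversarial perturbation:
#   minimize  lam*||z||_1 + c*hinge(x + delta)   subject to  delta = z,
# where hinge(v) = max(0, w.v + b + kappa) pushes the classifier score
# negative (i.e., flips the predicted class). All values are assumptions.
w = np.array([1.0, 2.0])   # toy classifier weights
b = -1.0
x = np.array([1.0, 1.0])   # original input; score w.x + b = 2 > 0 (class +)
lam, c, kappa, rho = 0.05, 1.0, 0.5, 1.0

def hinge_grad(delta):
    # Subgradient of c * max(0, w.(x+delta) + b + kappa) w.r.t. delta.
    return c * w if w @ (x + delta) + b + kappa > 0 else np.zeros_like(w)

delta = np.zeros(2)
z = np.zeros(2)
u = np.zeros(2)            # scaled dual variable
for _ in range(100):       # outer ADMM iterations
    # delta-step: gradient descent on attack loss + quadratic coupling term
    for _ in range(50):
        g = hinge_grad(delta) + rho * (delta - z + u)
        delta = delta - 0.05 * g
    # z-step: proximal operator of lam*||.||_1 (soft-thresholding)
    v = delta + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # dual update enforcing the consensus constraint delta = z
    u = u + delta - z

score = w @ (x + z) + b
print(score)  # negative score: the perturbed input is now misclassified
```

Swapping the z-step's proximal operator (soft-thresholding for L-1, shrinkage for L-2, clipping for L-infinity) is what lets a single ADMM loop cover the different distortion norms, which is the "little modification" the abstract refers to.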
Pages: 1065-1073
Page count: 9