An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

Cited by: 14
Authors
Zhao, Pu [1 ]
Liu, Sijia [2 ]
Wang, Yanzhi [1 ]
Lin, Xue [1 ]
Affiliations
[1] Northeastern Univ, Dept ECE, Boston, MA 02115 USA
[2] IBM Corp, Res AI, Armonk, NY 10504 USA
Funding
U.S. National Science Foundation;
Keywords
Deep Neural Networks; Adversarial Attacks; ADMM (Alternating Direction Method of Multipliers);
DOI
10.1145/3240508.3240639
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to legitimate original inputs, can mislead a DNN into classifying them as arbitrary target labels. A successful adversarial attack should achieve the targeted misclassification with minimal added distortion. In the literature, the added distortions are usually measured by the L0, L1, L2, and L-infinity norms, giving rise to L0, L1, L2, and L-infinity attacks, respectively. However, the literature lacks a versatile framework covering all these types of adversarial attacks. This work, for the first time, unifies the methods for generating adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, so that L0, L1, L2, and L-infinity attacks can all be implemented effectively within this general framework with only minor modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are the strongest so far, achieving both a 100% attack success rate and the minimal distortion.
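To make the operator-splitting idea concrete, the following is a minimal, hypothetical PyTorch sketch of an ADMM-style attack instantiated for the L2 norm; it is not the authors' implementation. The objective min_d ||d||_2 + c*f(x+d) is split via the consensus constraint z = d, so the norm term gets a closed-form proximal update while the network loss is handled by inner gradient steps. The function name admm_l2_attack, the C&W-style margin loss, the [0, 1] pixel range, and all hyperparameters (rho, c, step counts, learning rate) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def admm_l2_attack(model, x, target, rho=1.0, c=10.0,
                   outer_steps=50, inner_steps=20, lr=0.01):
    """Hypothetical sketch: x is a single-image batch (1, C, H, W) in [0, 1],
    target is the int label the attack should force. Not the paper's code."""
    z = torch.zeros_like(x)   # norm variable (handled by a proximal step)
    d = torch.zeros_like(x)   # perturbation variable (handled by gradients)
    u = torch.zeros_like(x)   # scaled dual variable for the constraint z = d
    tgt = torch.tensor([target])
    for _ in range(outer_steps):
        # z-update: prox of the L2 norm at v = d - u (block soft-thresholding).
        # Swapping this prox is what changes the attack norm (L0/L1/Linf).
        v = d - u
        z = torch.clamp(1.0 - 1.0 / (rho * v.norm() + 1e-12), min=0.0) * v
        # d-update: a few gradient steps on
        #   c * f(x + d) + (rho / 2) * ||z - d + u||^2
        d = d.detach().requires_grad_(True)
        opt = torch.optim.Adam([d], lr=lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            logits = model(x + d)
            # C&W-style margin loss: push the target logit above all others
            mask = F.one_hot(tgt, logits.size(1)).bool()
            other = logits.masked_fill(mask, float('-inf')).amax(dim=1)
            attack_loss = torch.clamp(other - logits[:, target], min=0.0).sum()
            penalty = (rho / 2) * (z - d + u).pow(2).sum()
            (c * attack_loss + penalty).backward()
            opt.step()
        d = d.detach()
        u = u + z - d             # dual ascent on the residual z - d
    return (x + d).clamp(0.0, 1.0)  # assumes inputs live in [0, 1]
```

Under this splitting, changing the attack norm only means changing the z-update: hard thresholding would yield an L0 attack, soft thresholding an L1 attack, and a box projection an L-infinity attack, which is how a single ADMM loop can cover all four norms with minor modifications.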
Pages: 1065-1073
Page count: 9
Related Papers
50 items in total
  • [31] Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks
    Krithivasan, Sarada
    Sen, Sanchari
    Raghunathan, Anand
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2020, 39 (11) : 4129 - 4141
  • [32] Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
    Siddique, Ayesha
    Hoque, Khaza Anuarul
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 364 - 369
  • [33] Mitigating Adversarial Attacks for Deep Neural Networks by Input Deformation and Augmentation
    Qiu, Pengfei
    Wang, Qian
    Wang, Dongsheng
    Lyu, Yongqiang
    Lu, Zhaojun
    Qu, Gang
    2020 25TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2020, 2020, : 157 - 162
  • [34] Fast adversarial attacks to deep neural networks through gradual sparsification
    Amini, Sajjad
    Heshmati, Alireza
    Ghaemmaghami, Shahrokh
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127
  • [35] Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification
    Jiang, Xiaoyu
    Kong, Xiangyin
    Zheng, Junhua
    Ge, Zhiqiang
    Zhang, Xinmin
    Song, Zhihuan
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (12) : 14297 - 14307
  • [36] Compressive imaging for defending deep neural networks from adversarial attacks
    Kravets, Vladislav
    Javidi, Bahram
    Stern, Adrian
    OPTICS LETTERS, 2021, 46 (08) : 1951 - 1954
  • [37] Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks
    Wang, Siyue
    Wang, Xiao
    Zhao, Pu
    Wen, Wujie
    Kaeli, David
    Chin, Peter
    Lin, Xue
2018 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD) DIGEST OF TECHNICAL PAPERS, 2018
  • [38] Imperceptible CMOS camera dazzle for adversarial attacks on deep neural networks
    Stein, Zvi
    Stern, Adrian
arXiv, 2023
  • [39] Simple Black-Box Adversarial Attacks on Deep Neural Networks
    Narodytska, Nina
    Kasiviswanathan, Shiva
    2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017, : 1310 - 1318
  • [40] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020