An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

Cited by: 14
Authors
Zhao, Pu [1 ]
Liu, Sijia [2 ]
Wang, Yanzhi [1 ]
Lin, Xue [1 ]
Institutions
[1] Northeastern Univ, Dept ECE, Boston, MA 02115 USA
[2] IBM Corp, Res AI, Armonk, NY 10504 USA
Funding
U.S. National Science Foundation (NSF);
Keywords
Deep Neural Networks; Adversarial Attacks; ADMM (Alternating Direction Method of Multipliers);
DOI
10.1145/3240508.3240639
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202 ;
Abstract
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original legal inputs, can mislead a DNN into classifying them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with minimal added distortion. In the literature, the added distortions are usually measured by the L-0, L-1, L-2, and L-infinity norms, yielding L-0, L-1, L-2, and L-infinity attacks, respectively. However, the literature lacks a versatile framework covering all of these attack types. This work for the first time unifies the methods of generating adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, so that L-0, L-1, L-2, and L-infinity attacks can all be implemented effectively within one general framework with minor modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are the strongest to date, achieving both a 100% attack success rate and the minimal distortion.
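The operator-splitting pattern the abstract refers to can be illustrated on a toy problem. The sketch below is an assumption for illustration only, not the authors' attack code: it applies the standard ADMM updates to a small L1-regularized least-squares (lasso) problem, showing the same structure the paper adapts to attack generation, where the smooth data-fitting term and the nonsmooth norm penalty on the distortion are split into separate, easy subproblems coupled by a dual variable.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM.

    Variable splitting: introduce z with the constraint x = z, so the
    smooth least-squares term (x-update) and the nonsmooth L1 term
    (z-update) are each minimized in closed form.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable for the constraint x = z
    M = A.T @ A + rho * np.eye(n)  # cached factor for every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # x-update: quadratic
        z = soft_threshold(x + u, lam / rho)         # z-update: prox of L1
        u = u + x - z                                # dual ascent on x = z
    return x, z

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.5, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
x, z = admm_lasso(A, b, lam=0.1)
print(np.round(z, 2))  # sparse estimate close to x_true
```

In the paper's setting, the roles of the two subproblems are analogous: one handles the chosen distortion norm (L-0, L-1, L-2, or L-infinity, each via its own proximal step), and the other handles the misclassification loss, which is what lets one framework cover all four attack types by swapping only the norm's proximal operator.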
Pages: 1065-1073
Page count: 9