GNP ATTACK: TRANSFERABLE ADVERSARIAL EXAMPLES VIA GRADIENT NORM PENALTY

Cited by: 1
Authors
Wu, Tao [1 ]
Luo, Tie [1 ]
Wunsch, Donald C. [2 ]
Affiliations
[1] Missouri Univ Sci & Technol, Dept Comp Sci, Rolla, MO 65409 USA
[2] Missouri Univ Sci & Technol, Dept Elect & Comp Engn, Rolla, MO USA
Keywords
Adversarial machine learning; Transferability; Deep neural networks; Input gradient regularization
DOI
10.1109/ICIP49359.2023.10223158
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial examples (AE) with good transferability enable practical black-box attacks on diverse target models, requiring no insider knowledge of those models. Previous methods often generate AE with little or no transferability: they overfit to the particular architecture and feature representation of the source (white-box) model, and the resulting AE barely work against target (black-box) models. In this paper, we propose a novel approach that enhances AE transferability using a Gradient Norm Penalty (GNP), which drives the loss-function optimization to converge to a flat region of local optima in the loss landscape. By attacking 11 state-of-the-art (SOTA) deep learning models and 6 advanced defense methods, we empirically show that GNP is highly effective at generating AE with high transferability. We also demonstrate that it is flexible: it can be easily integrated with other gradient-based methods to mount stronger transfer-based attacks.
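To make the flat-optima idea concrete, below is a minimal, hypothetical PyTorch sketch of an iterative sign-gradient attack whose objective adds a gradient norm penalty, i.e. it maximizes L(x) - lambda*||grad_x L(x)|| inside an L-infinity ball. The finite-difference surrogate for the penalty's gradient and all hyperparameter names (r, beta, eps, alpha, steps) are illustrative assumptions, not the paper's exact update rule.

```python
# Hypothetical sketch: I-FGSM augmented with a gradient norm penalty (GNP).
# The finite-difference step and hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def gnp_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, r=0.01, beta=0.8):
    """Maximize L(x') - lambda*||grad L(x')|| within ||x' - x||_inf <= eps,
    steering the attack toward a flat region of the loss landscape."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # g1: gradient of the classification loss at the current iterate
        g1 = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        # g2: gradient at a point nudged along g1; (g2 - g1)/r approximates
        # the Hessian-vector term arising from differentiating ||g1||
        g1_norm = g1.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
        x_nb = (x_adv + r * g1 / g1_norm).detach().requires_grad_(True)
        g2 = torch.autograd.grad(F.cross_entropy(model(x_nb), y), x_nb)[0]
        # Ascend the loss while descending the gradient-norm penalty:
        # grad[L - lambda*||g||] ~= (1 + beta)*g1 - beta*g2, with beta = lambda/r
        g = (1 + beta) * g1 - beta * g2
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back onto the eps-ball and the valid pixel range
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

Because the penalty only reshapes the gradient used at each step, the (1 + beta)*g1 - beta*g2 surrogate can be dropped into momentum or input-diversity variants unchanged, which is the kind of flexibility the abstract refers to.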
Pages: 3110-3114 (5 pages)