GNP ATTACK: TRANSFERABLE ADVERSARIAL EXAMPLES VIA GRADIENT NORM PENALTY

Cited by: 1

Authors
Wu, Tao [1 ]
Luo, Tie [1 ]
Wunsch, Donald C. [2 ]
Affiliations
[1] Missouri Univ Sci & Technol, Dept Comp Sci, Rolla, MO 65409 USA
[2] Missouri Univ Sci & Technol, Dept Elect & Comp Engn, Rolla, MO USA
Keywords
Adversarial machine learning; Transferability; Deep neural networks; Input gradient regularization
DOI
10.1109/ICIP49359.2023.10223158
CLC number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Adversarial examples (AE) with good transferability enable practical black-box attacks on diverse target models, where insider knowledge about the target models is not required. Previous methods often generate AE with no or very limited transferability; that is, they easily overfit to the particular architecture and feature representation of the source, white-box model, and the generated AE barely work for target, black-box models. In this paper, we propose a novel approach to enhance AE transferability using Gradient Norm Penalty (GNP). It drives the loss function optimization procedure to converge to a flat region of local optima in the loss landscape. By attacking 11 state-of-the-art (SOTA) deep learning models and 6 advanced defense methods, we empirically show that GNP is very effective in generating AE with high transferability. We also demonstrate that it is very flexible in that it can be easily integrated with other gradient-based methods for stronger transfer-based attacks.
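The core idea of penalizing the gradient norm to seek flat optima can be sketched with a look-ahead gradient combination: take the loss gradient at the current point and at a small step in the normalized gradient direction, then mix the two. The toy loss, step sizes, and mixing weight below are illustrative assumptions for a minimal NumPy sketch, not the paper's exact model, loss, or hyper-parameters:

```python
import numpy as np

# Toy differentiable "loss" standing in for a model's classification loss.
def loss(x):
    return np.sum(np.sin(x) + 0.1 * x**2)

def grad(x):
    return np.cos(x) + 0.2 * x

def gnp_step(x, step=0.05, r=0.01, beta=0.8):
    """One sign-gradient ascent step with a gradient-norm-penalty correction.

    The penalty's second-order effect is approximated by evaluating the
    gradient at a look-ahead point x + r * g/||g|| and mixing it with the
    gradient at x, which biases the update toward flat regions.
    """
    g1 = grad(x)
    g_unit = g1 / (np.linalg.norm(g1) + 1e-12)   # normalized ascent direction
    g2 = grad(x + r * g_unit)                    # gradient at look-ahead point
    g = (1.0 / (1.0 + beta)) * g1 + (beta / (1.0 + beta)) * g2
    return x + step * np.sign(g)                 # I-FGSM-style sign update

# Iterating the step crafts a bounded perturbation on the toy loss.
x0 = np.array([0.5, -1.0, 2.0])
x_adv = x0.copy()
for _ in range(10):
    x_adv = gnp_step(x_adv)
```

In a real attack, `loss`/`grad` would come from backpropagation through the source model, and `step` would be chosen so that the accumulated perturbation respects an L-infinity budget.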
Pages: 3110 - 3114
Page count: 5
Related papers
50 entries in total
  • [31] Towards Transferable Adversarial Examples Using Meta Learning
    Fan, Mingyuan
    Yin, Jia-Li
    Liu, Ximeng
    Guo, Wenzhong
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT I, 2022, 13155 : 178 - 192
  • [32] Common knowledge learning for generating transferable adversarial examples
    Yang, Ruijie
    Guo, Yuanfang
    Wang, Junfu
    Zhou, Jiantao
    Wang, Yunhong
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (10)
  • [33] Improving transferable adversarial attack for vision transformers via global attention and local drop
    Li, Tuo
    Han, Yahong
    MULTIMEDIA SYSTEMS, 2023, 29 (06) : 3467 - 3480
  • [34] Towards Transferable Unrestricted Adversarial Examples with Minimum Changes
    Liu, Fangcheng
    Zhang, Chao
    Zhang, Hongyang
    2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 327 - 338
  • [35] Generating Transferable Adversarial Examples against Vision Transformers
    Wang, Yuxuan
    Wang, Jiakai
    Yin, Zinxin
    Gong, Ruihao
    Wang, Jingyi
    Liu, Aishan
    Liu, Xianglong
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5181 - 5190
  • [37] Adversarial Examples for Image Cropping: Gradient-Based and Bayesian-Optimized Approaches for Effective Adversarial Attack
    Yoshida, Masatomo
    Namura, Haruto
    Okuda, Masahiro
    IEEE ACCESS, 2024, 12 : 86541 - 86552
  • [38] Boosting the Transferability of Adversarial Examples with Gradient-Aligned Ensemble Attack for Speaker Recognition
    Li, Zhuhai
    Zhang, Jie
    Guo, Wu
    Wu, Haochen
    INTERSPEECH 2024, 2024, : 532 - 536
  • [39] Meta Gradient Adversarial Attack
    Yuan, Zheng
    Zhang, Jie
    Jia, Yunpei
    Tan, Chuanqi
    Xue, Tao
    Shan, Shiguang
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7728 - 7737
  • [40] An Enhanced Transferable Adversarial Attack Against Object Detection
    Shi, Guoqiang
    Lin, Zhi
    Peng, Anjie
    Zeng, Hui
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,