Enhancing adversarial transferability with local transformation

Cited: 0
Authors
Zhang, Yang [1 ]
Hong, Jinbang [2 ]
Bai, Qing [3 ]
Liang, Haifeng [1 ]
Zhu, Peican [4 ]
Song, Qun [5 ]
Affiliations
[1] Xian Technol Univ, Sch Optoelect Engn, Xian 710021, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[3] North Electro Opt Co Ltd, Xian 710043, Shaanxi, Peoples R China
[4] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[5] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep neural networks; Adversarial examples; Transferable attack; Adversarial transferability; NEONATAL SLEEP;
DOI
10.1007/s40747-024-01628-4
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Robust deep learning models have demonstrated significant applicability in real-world scenarios, and adversarial attacks play a crucial role in assessing their robustness. Among such attacks, transfer-based attacks, which leverage white-box models to generate adversarial examples, have garnered considerable attention for their remarkable efficiency under the black-box setting. Notably, existing transfer attacks often exploit input transformations to amplify their effectiveness. However, prevailing input transformation-based methods typically modify input images indiscriminately, overlooking regional disparities. To bolster the transferability of adversarial examples, we propose the Local Transformation Attack (LTA) based on forward class activation maps. Specifically, we first obtain future examples through accumulated momentum and compute forward class activation maps. We then use these maps to identify crucial regions and apply pixel scaling to transform them. Finally, we update the adversarial examples using the average gradient over the transformed images. Extensive experiments demonstrate the effectiveness of the proposed LTA: compared to current state-of-the-art attack approaches, LTA improves black-box attack performance by 7.9%, and for ensemble attacks it achieves an average attack success rate of 98.3%.
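The abstract outlines a four-step iteration: momentum look-ahead to a "future" example, a forward class-activation map on it, pixel scaling restricted to the crucial region, and a momentum update from the averaged gradients. The sketch below is only an illustration of that pipeline under stated assumptions, not the paper's exact method: `toy_grad` stands in for a real white-box model gradient, `saliency_map` for a true class activation map, and the quantile threshold `q` and scale set are invented parameters.

```python
import numpy as np

def toy_grad(x, target=0.5):
    # Stand-in for a white-box model's loss gradient: pushes pixels toward `target`.
    return x - target

def saliency_map(x):
    # Stand-in for a forward class activation map: here, gradient magnitude.
    return np.abs(toy_grad(x))

def lta_step(x, x_adv, g, eps=8/255, alpha=2/255, mu=1.0,
             scales=(1.0, 0.9, 0.8), q=0.7):
    """One hedged LTA-style iteration (illustrative parameters, not the paper's)."""
    # 1. Look ahead with accumulated momentum to obtain a "future" example.
    x_future = np.clip(x_adv + alpha * mu * np.sign(g), 0.0, 1.0)
    # 2. Compute an activation-style map on it and mark crucial regions
    #    (top (1 - q) fraction of activations).
    cam = saliency_map(x_future)
    mask = cam >= np.quantile(cam, q)
    # 3. Average gradients over copies whose crucial region is pixel-scaled.
    grad = np.zeros_like(x_adv)
    for s in scales:
        x_t = np.where(mask, x_adv * s, x_adv)
        grad += toy_grad(x_t)
    grad /= len(scales)
    # 4. Momentum accumulation and a sign update, projected to the eps-ball.
    g = mu * g + grad / (np.abs(grad).mean() + 1e-12)
    x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return np.clip(x_adv, 0.0, 1.0), g

rng = np.random.default_rng(0)
x = rng.random((8, 8))                     # toy single-channel "image" in [0, 1]
x_adv, g = x.copy(), np.zeros_like(x)
for _ in range(10):
    x_adv, g = lta_step(x, x_adv, g)
print(float(np.abs(x_adv - x).max()))      # perturbation stays within the eps budget
```

In a real attack, `toy_grad` would be replaced by backpropagation through the surrogate model and `saliency_map` by the paper's forward class activation maps; the projection in step 4 is the standard L∞ constraint used by momentum-based transfer attacks.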
Pages: 13