Enhancing adversarial transferability with local transformation

Times Cited: 0
Authors
Zhang, Yang [1 ]
Hong, Jinbang [2 ]
Bai, Qing [3 ]
Liang, Haifeng [1 ]
Zhu, Peican [4 ]
Song, Qun [5 ]
Affiliations
[1] Xian Technol Univ, Sch Optoelect Engn, Xian 710021, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[3] North Electro Opt Co Ltd, Xian 710043, Shaanxi, Peoples R China
[4] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[5] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep neural networks; Adversarial examples; Transferable attack; Adversarial transferability; NEONATAL SLEEP;
DOI
10.1007/s40747-024-01628-4
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Robust deep learning models have demonstrated significant applicability in real-world scenarios, and adversarial attacks play a crucial role in assessing their robustness. Among such attacks, transfer-based attacks, which leverage white-box models to generate adversarial examples, have garnered considerable attention because they remain remarkably effective in the black-box setting. Existing transfer attacks often exploit input transformations to amplify their effectiveness; however, prevailing input transformation-based methods typically modify input images indiscriminately, overlooking regional disparities. To bolster the transferability of adversarial examples, we propose the Local Transformation Attack (LTA), which is based on forward class activation maps. Specifically, we first obtain future examples through accumulated momentum and compute their forward class activation maps. We then use these maps to identify crucial regions and apply pixel scaling to them as the transformation. Finally, we update the adversarial examples using the average gradient of the transformed images. Extensive experiments demonstrate the effectiveness of the proposed LTA: compared with current state-of-the-art attacks, LTA improves black-box attack performance by 7.9%, and in the case of ensemble attacks it achieves an average attack success rate of 98.3%.
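For readers who want a concrete picture of the loop outlined in the abstract, the following is a minimal, hedged PyTorch sketch rather than the authors' implementation: it combines an MI-FGSM-style momentum update, a Grad-CAM-style class activation map computed on a momentum look-ahead ("future") example, pixel scaling restricted to the high-activation regions, and gradient averaging over the scaled copies. The helper names (grad_cam_mask, lta_attack), the 0.5 mask threshold, the power-of-two scaling factors, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a CAM-guided local-transformation transfer attack (PyTorch).
import torch
import torch.nn.functional as F


def grad_cam_mask(model, feature_layer, x, y, thresh=0.5):
    """Grad-CAM-style class activation map on x, thresholded into a binary mask."""
    feats, grads = [], []
    h_f = feature_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h_b = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x.clone().requires_grad_(True))
    logits.gather(1, y.view(-1, 1)).sum().backward()
    h_f.remove()
    h_b.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)              # channel importance
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))    # weighted feature sum
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = cam - cam.amin(dim=(2, 3), keepdim=True)
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)        # normalise to [0, 1]
    return (cam > thresh).float()                                  # "crucial region" mask


def lta_attack(model, feature_layer, x, y, eps=16 / 255, steps=10, mu=1.0, n_copies=5):
    """Momentum attack whose input transformation (pixel scaling) is applied only
    inside the CAM mask computed on a momentum look-ahead ("future") example."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_future = (x_adv + alpha * mu * g).detach()               # look-ahead example
        mask = grad_cam_mask(model, feature_layer, x_future, y).detach()
        grad_sum = torch.zeros_like(x)
        for i in range(n_copies):
            scale = 1.0 / (2 ** i)                                 # local pixel scaling
            x_t = (x_adv * mask * scale + x_adv * (1 - mask)).detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_t), y)
            grad_sum += torch.autograd.grad(loss, x_t)[0]
        grad = grad_sum / n_copies                                 # average gradient over copies
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = torch.max(torch.min(x_adv + alpha * g.sign(), x + eps), x - eps)
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```

For a torchvision ResNet-50 surrogate, for example, one might call lta_attack(model, model.layer4, images, labels) with images in [0, 1]; the choice of feature layer is, again, an assumption for illustration.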
Pages: 13