Enhancing adversarial transferability with local transformation

Cited: 0
Authors
Zhang, Yang [1 ]
Hong, Jinbang [2 ]
Bai, Qing [3 ]
Liang, Haifeng [1 ]
Zhu, Peican [4 ]
Song, Qun [5 ]
Affiliations
[1] Xian Technol Univ, Sch Optoelect Engn, Xian 710021, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[3] North Electro Opt Co Ltd, Xian 710043, Shaanxi, Peoples R China
[4] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[5] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep neural networks; Adversarial examples; Transferable attack; Adversarial transferability; NEONATAL SLEEP;
DOI
10.1007/s40747-024-01628-4
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Robust deep learning models have demonstrated significant applicability in real-world scenarios, and adversarial attacks play a crucial role in assessing their robustness. Among such attacks, transfer-based attacks, which leverage white-box models to generate adversarial examples, have garnered considerable attention because of their remarkable efficiency under the black-box setting. Existing transfer attacks often exploit input transformations to amplify their effectiveness; however, prevailing input transformation-based methods typically modify input images indiscriminately, overlooking regional disparities. To bolster the transferability of adversarial examples, we propose the Local Transformation Attack (LTA), which builds on forward class activation maps. Specifically, we first obtain future examples through accumulated momentum and compute forward class activation maps. We then use these maps to identify crucial regions and apply pixel scaling there as the transformation. Finally, we update the adversarial examples using the average gradient over the transformed images. Extensive experiments demonstrate the effectiveness of the proposed LTA: compared to current state-of-the-art attack approaches, LTA improves black-box attack performance by 7.9%, and in the ensemble-attack setting it achieves an average attack success rate of 98.3%.
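The abstract outlines a momentum-based loop: look ahead with accumulated momentum, locate crucial regions via a forward class activation map, scale pixels only in those regions, and update with the average gradient over the transformed copies. A minimal toy sketch of that loop is shown below. It is not the paper's implementation: the "model" is a simple quadratic score so gradients can be computed in closed form with numpy, the saliency proxy standing in for the class activation map and the scaling range are illustrative assumptions, and all function and parameter names (`lta_attack_sketch`, `topk`, `n_copies`) are hypothetical.

```python
import numpy as np

def lta_attack_sketch(x, w, eps=0.1, steps=10, mu=1.0,
                      n_copies=3, topk=0.3, rng=None):
    """Toy sketch of the LTA-style update loop (illustrative only).

    The surrogate "model" scores an input as loss(x) = sum(w * x**2),
    so the input gradient is simply 2 * w * x.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    alpha = eps / steps            # per-step perturbation budget
    g = np.zeros_like(x)           # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        # 1) "future example": look ahead along the momentum direction
        x_fut = np.clip(x_adv + alpha * mu * np.sign(g), x - eps, x + eps)
        # 2) saliency proxy standing in for a forward class activation map
        sal = np.abs(w * x_fut)
        mask = sal >= np.quantile(sal, 1.0 - topk)   # "crucial" pixels
        # 3) average gradient over locally scaled copies
        grad = np.zeros_like(x)
        for _ in range(n_copies):
            scale = rng.uniform(0.5, 1.0)
            x_t = np.where(mask, x_fut * scale, x_fut)  # scale key pixels only
            grad += 2.0 * w * x_t                       # grad of sum(w * x_t**2)
        grad /= n_copies
        # 4) momentum accumulation and signed step, projected to the eps-ball
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

In this sketch the local scaling only matters because the toy gradient depends on the input; with a real network the transformed copies would each require a forward/backward pass through the surrogate model.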
Pages: 13
Related Papers
50 records
  • [21] ENHANCING ADVERSARIAL TRANSFERABILITY IN OBJECT DETECTION WITH BIDIRECTIONAL FEATURE DISTORTION
    Ding, Xinlong
    Chen, Jiansheng
    Yu, Hongwei
    Shang, Yu
    Ma, Huimin
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 5525 - 5529
  • [22] LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate
    Wu, Tao
    Luo, Tie
    Wunsch, Donald C., II
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 6135 - 6143
  • [23] FDT: Improving the transferability of adversarial examples with frequency domain transformation
    Ling, Jie
    Chen, Jinhui
    Li, Honglei
    COMPUTERS & SECURITY, 2024, 144
  • [24] Enhancing transferability of adversarial examples via rotation-invariant attacks
    Duan, Yexin
    Zou, Junhua
    Zhou, Xingyu
    Zhang, Wu
    Zhang, Jin
    Pan, Zhisong
    IET COMPUTER VISION, 2022, 16 (01) : 1 - 11
  • [25] Enhancing adversarial attack transferability with multi-scale feature attack
    Sun, Caixia
    Zou, Lian
    Fan, Cien
    Shi, Yu
    Liu, Yifeng
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2021, 19 (02)
  • [26] Enhancing the Transferability of Adversarial Examples Based on Nesterov Momentum for Recommendation Systems
    Qian, Fulan
    Yuan, Bei
    Chen, Hai
    Chen, Jie
    Lian, Defu
    Zhao, Shu
    IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (05) : 1276 - 1287
  • [27] Enhancing the Transferability of Adversarial Attacks via Multi-Feature Attention
    Zheng, Desheng
    Ke, Wuping
    Li, Xiaoyu
    Duan, Yaoxin
    Yin, Guangqiang
    Min, Fan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 1462 - 1474
  • [28] Enhancing Transferability of Adversarial Examples Through Mixed-Frequency Inputs
    Qian, Yaguan
    Chen, Kecheng
    Wang, Bin
    Gu, Zhaoquan
    Ji, Shouling
    Wang, Wei
    Zhang, Yanchun
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 7633 - 7645
  • [29] Enhancing transferability of adversarial examples with pixel-level scale variation
    Mao, Zhongshu
    Lu, Yiqin
    Cheng, Zhe
    Shen, Xiong
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2023, 118
  • [30] Efficient Transferability of Generative Perturbations with Salient Feature Disruption and Adversarial Transformation
    Li, Huanhuan
    Huang, He
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 4589 - 4594