Enhancing adversarial transferability with local transformation

Cited by: 0
Authors
Zhang, Yang [1 ]
Hong, Jinbang [2 ]
Bai, Qing [3 ]
Liang, Haifeng [1 ]
Zhu, Peican [4 ]
Song, Qun [5 ]
Affiliations
[1] Xian Technol Univ, Sch Optoelect Engn, Xian 710021, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[3] North Electro Opt Co Ltd, Xian 710043, Shaanxi, Peoples R China
[4] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[5] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep neural networks; Adversarial examples; Transferable attack; Adversarial transferability; NEONATAL SLEEP;
DOI
10.1007/s40747-024-01628-4
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Robust deep learning models have demonstrated significant applicability in real-world scenarios, and adversarial attacks play a crucial role in assessing their robustness. Among such attacks, transfer-based attacks, which leverage white-box models to generate adversarial examples, have garnered considerable attention and have proven remarkably efficient, particularly in the black-box setting. Notably, existing transfer attacks often exploit input transformations to amplify their effectiveness. However, prevailing input transformation-based methods typically modify input images indiscriminately, overlooking regional disparities. To bolster the transferability of adversarial examples, we propose the Local Transformation Attack (LTA) based on forward class activation maps. Specifically, we first obtain future examples through accumulated momentum and compute forward class activation maps. Subsequently, we utilize these maps to identify crucial areas and apply pixel scaling for transformation. Finally, we update the adversarial examples using the average gradient of the transformed images. Extensive experiments convincingly demonstrate the effectiveness of the proposed LTA: compared to current state-of-the-art attack approaches, LTA achieves a 7.9% increase in black-box attack performance, and in the case of ensemble attacks it achieves an average attack success rate of 98.3%.
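The abstract's pipeline (momentum look-ahead, forward-CAM masking, pixel scaling inside the crucial region, averaged gradient update) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `grad_fn`, `cam_fn`, and all hyperparameter values here are placeholder assumptions standing in for a white-box surrogate model, its class activation maps, and the paper's actual settings.

```python
import numpy as np

def lta_attack(x, grad_fn, cam_fn, eps=16 / 255, steps=10, mu=1.0,
               n_copies=5, rng=None):
    """Hedged sketch of a Local Transformation Attack (LTA)-style loop.

    x       : clean input image, values in [0, 1]
    grad_fn : callable returning the loss gradient w.r.t. its input
              (stand-in for backprop through a white-box surrogate)
    cam_fn  : callable returning a [0, 1] activation map; values > 0.5
              mark the 'crucial' region (forward CAM in the paper)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    alpha = eps / steps              # per-iteration step size
    g = np.zeros_like(x)             # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        # 1) look-ahead ("future") example via accumulated momentum
        x_nes = np.clip(x_adv + alpha * mu * g, 0.0, 1.0)
        # 2) forward CAM identifies the crucial region
        mask = cam_fn(x_nes) > 0.5
        # 3) average gradients over several copies with pixel scaling
        #    applied only inside the crucial region
        avg_grad = np.zeros_like(x)
        for _ in range(n_copies):
            scale = rng.uniform(0.5, 1.0)
            x_t = x_nes.copy()
            x_t[mask] = x_t[mask] * scale
            avg_grad += grad_fn(x_t)
        avg_grad /= n_copies
        # 4) momentum accumulation and signed step, kept in the eps-ball
        g = mu * g + avg_grad / (np.abs(avg_grad).mean() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

The look-ahead in step 1 mirrors Nesterov-style momentum attacks; restricting the scaling transform to the CAM mask in step 3 is what distinguishes a *local* transformation from the indiscriminate whole-image transforms the abstract criticizes.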
Pages: 13
Related Papers
50 records in total
  • [41] DIB-UAP: enhancing the transferability of universal adversarial perturbation via deep information bottleneck
    Wang, Yang
    Zheng, Yunfei
    Chen, Lei
    Yang, Zhen
    Cao, Tieyong
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (05) : 6825 - 6837
  • [42] Enhancing Cross-Task Black-Box Transferability of Adversarial Examples with Dispersion Reduction
    Lu, Yantao
    Jia, Yunhan
    Wang, Jianyu
    Li, Bai
    Chai, Weiheng
    Carin, Lawrence
    Velipasalar, Senem
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 937 - 946
  • [43] Improving the adversarial transferability with relational graphs ensemble adversarial attack
    Pi, Jiatian
    Luo, Chaoyang
    Xia, Fen
    Jiang, Ning
    Wu, Haiying
    Wu, Zhiyou
    FRONTIERS IN NEUROSCIENCE, 2023, 16
  • [44] An approach to improve transferability of adversarial examples
    Zhang, Weihan
    Guo, Ying
    PHYSICAL COMMUNICATION, 2024, 64
  • [45] Remix: Towards the transferability of adversarial examples
    Zhao, Hongzhi
    Hao, Lingguang
    Hao, Kuangrong
    Wei, Bing
    Cai, Xin
    NEURAL NETWORKS, 2023, 163 : 367 - 378
  • [46] Dynamic defenses and the transferability of adversarial examples
    Thomas, Sam
    Koleini, Farnoosh
    Tabrizi, Nasseh
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 276 - 284
  • [47] Rethinking the Backward Propagation for Adversarial Transferability
    Wang, Xiaosen
    Tong, Kangheng
    He, Kun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [48] An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability
    Chen, Bin
    Yin, Jiali
    Chen, Shukai
    Chen, Bohao
    Liu, Ximeng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4466 - 4475
  • [49] A Geometric Perspective on the Transferability of Adversarial Directions
    Charles, Zachary
    Rosenberg, Harrison
    Papailiopoulos, Dimitris
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [50] Backpropagation Path Search On Adversarial Transferability
    Xu, Zhuoer
    Gu, Zhangxuan
    Zhang, Jianping
    Cui, Shiwen
    Meng, Changhua
    Wang, Weiqiang
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4640 - 4650