Push & Pull: Transferable Adversarial Examples With Attentive Attack

Cited by: 29
Authors
Gao, Lianli [1 ,2 ,3 ]
Huang, Zijie [2 ,3 ]
Song, Jingkuan [1 ]
Yang, Yang [2 ,3 ]
Shen, Heng Tao [2 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Inst Neurol, Sichuan Prov Peoples Hosp, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Future Media Ctr, Chengdu 611731, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Perturbation methods; Feature extraction; Computational modeling; Task analysis; Predictive models; Neural networks; Iterative methods; Image classification; adversarial attack; transferability; targeted attack;
DOI
10.1109/TMM.2021.3079723
CLC classification: TP [Automation technology, computer technology]
Discipline code: 0812
Abstract
A targeted attack aims to mislead a classification model into predicting a specific class; it can be further divided into black-box and white-box targeted attacks depending on whether the classification model is known. A growing number of approaches craft adversarial examples by disrupting image representations. However, this type of method often suffers from either a low white-box targeted attack success rate or poor black-box targeted attack transferability. To address these problems, we propose a Transferable Attentive Attack (TAA) method that adds perturbations to clean images based on the attended regions and features. This is motivated by an important observation: deep-learning based classification models (and even shallow-learning based models such as SIFT) make predictions mainly based on the informative and discriminative regions of an image. Specifically, the features of the informative regions are first extracted, and the anchor image's features are then iteratively "pushed" away from the source class and simultaneously "pulled" closer to the target class during the attack. Moreover, we introduce a new strategy in which the attack selects the centroids of the source and target class clusters as the input of a triplet loss to achieve high transferability. Experimental results demonstrate that our method improves the transferability of adversarial examples while maintaining a higher success rate for white-box targeted attacks compared with state-of-the-art methods. In particular, TAA attacks on image-representation based tasks such as VQA also cause a significant drop in accuracy.
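The push–pull idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes an identity "feature extractor" in place of the attended CNN features TAA actually uses, and the function names (`triplet_push_pull_loss`, `attentive_attack`) and hyperparameters are illustrative. It shows only the core mechanic: a triplet-style loss over class centroids, minimized by iterative signed-gradient steps inside an L-infinity budget.

```python
import numpy as np

def triplet_push_pull_loss(feat, src_centroid, tgt_centroid, margin=1.0):
    # "Pull" term: squared distance to the target-class centroid (minimize).
    # "Push" term: squared distance to the source-class centroid (maximize).
    pull = np.sum((feat - tgt_centroid) ** 2)
    push = np.sum((feat - src_centroid) ** 2)
    return max(pull - push + margin, 0.0)

def attentive_attack(x, src_centroid, tgt_centroid,
                     eps=0.5, step=0.05, iters=20, margin=1.0):
    # Identity feature extractor for illustration only; TAA extracts
    # features of attended (informative) image regions instead.
    x_adv = x.astype(float).copy()
    for _ in range(iters):
        if triplet_push_pull_loss(x_adv, src_centroid, tgt_centroid, margin) == 0.0:
            break  # margin satisfied: pushed off source, pulled onto target
        # Analytic gradient of (pull - push) w.r.t. the features:
        # 2(f - t) - 2(f - s)
        grad = 2 * (x_adv - tgt_centroid) - 2 * (x_adv - src_centroid)
        x_adv = x_adv - step * np.sign(grad)      # signed descent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within L_inf budget
    return x_adv
```

After the loop, the adversarial point is strictly closer to the target centroid than the clean input was, while every coordinate stays within `eps` of the original — the same constraint shape used by standard iterative attacks.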
Pages: 2329-2338
Page count: 10
Related papers
50 records in total
  • [11] Transferable adversarial attack on image tampering localization
    Cao, Gang
    Wang, Yuqi
    Zhu, Haochen
    Lou, Zijie
    Yu, Lifang
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 102
  • [12] Diffusion Models for Imperceptible and Transferable Adversarial Attack
    Chen, Jianqi
    Chen, Hao
    Chen, Keyan
    Zhang, Yilan
    Zou, Zhengxia
    Shi, Zhenwei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (02) : 961 - 977
  • [13] Transferable adversarial examples based on global smooth perturbations
    Liu, Yujia
    Jiang, Ming
    Jiang, Tingting
    COMPUTERS & SECURITY, 2022, 121
  • [14] Towards Transferable Adversarial Examples Using Meta Learning
    Fan, Mingyuan
    Yin, Jia-Li
    Liu, Ximeng
    Guo, Wenzhong
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT I, 2022, 13155 : 178 - 192
  • [15] Common knowledge learning for generating transferable adversarial examples
    Yang, Ruijie
    Guo, Yuanfang
    Wang, Junfu
    Zhou, Jiantao
    Wang, Yunhong
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (10)
  • [16] Learning Transferable Adversarial Examples via Ghost Networks
    Li, Yingwei
    Bai, Song
    Zhou, Yuyin
    Xie, Cihang
    Zhang, Zhishuai
    Yuille, Alan
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 11458 - 11465
  • [17] Towards Transferable Unrestricted Adversarial Examples with Minimum Changes
    Liu, Fangcheng
    Zhang, Chao
    Zhang, Hongyang
    2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 327 - 338
  • [18] Generating Transferable Adversarial Examples against Vision Transformers
    Wang, Yuxuan
    Wang, Jiakai
    Yin, Zinxin
    Gong, Ruihao
    Wang, Jingyi
    Liu, Aishan
    Liu, Xianglong
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5181 - 5190
  • [19] An Enhanced Transferable Adversarial Attack Against Object Detection
    Shi, Guoqiang
    Lin, Zhi
    Peng, Anjie
    Zeng, Hui
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [20] Generating Transferable Adversarial Examples From the Perspective of Ensemble and Distribution
    Zhang, Huangyi
    Liu, Ximeng
    PROCEEDINGS OF 2024 3RD INTERNATIONAL CONFERENCE ON CYBER SECURITY, ARTIFICIAL INTELLIGENCE AND DIGITAL ECONOMY, CSAIDE 2024, 2024, : 173 - 177