Enhancing Transferability of Adversarial Examples Through Mixed-Frequency Inputs

Cited by: 1
Authors
Qian, Yaguan [1 ]
Chen, Kecheng [1 ]
Wang, Bin [2 ]
Gu, Zhaoquan [3 ]
Ji, Shouling [4 ]
Wang, Wei [5 ]
Zhang, Yanchun [6 ,7 ]
Affiliations
[1] Zhejiang Univ Sci & Technol, Sch Big Data Sci, Hangzhou 310023, Peoples R China
[2] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[3] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518071, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[5] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[6] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[7] Victoria Univ, Sch Comp Sci & Math, Melbourne, Vic 8001, Australia
Funding
National Natural Science Foundation of China;
Keywords
Frequency-domain analysis; Closed box; Noise; Glass box; Training; Optimization; Computational modeling; Security vulnerability; adversarial examples; transfer-based attack; Fourier transform;
DOI
10.1109/TIFS.2024.3430508
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Recent studies have shown that Deep Neural Networks (DNNs) are easily deceived by adversarial examples, revealing a serious vulnerability. Owing to their transferability, adversarial examples crafted on one model can attack other models with different architectures, which is known as a transfer-based black-box attack. Input transformation is one of the most effective methods for improving adversarial transferability. In particular, attacks that fuse information from other categories of images point to a promising direction for adversarial attacks. However, current techniques rely on input transformations in the spatial domain, which ignore the frequency information of the image and limit transferability. To tackle this issue, we propose Mixed-Frequency Inputs (MFI), an attack designed from a frequency-domain perspective. MFI alleviates the overfitting of adversarial examples to the source model by incorporating high-frequency components from various kinds of images when calculating the gradient. By accumulating these high-frequency components, MFI obtains a more stable gradient direction at each iteration, leading to the discovery of better local maxima and enhanced transferability. Extensive experimental results on the ImageNet-compatible datasets demonstrate that MFI outperforms existing transform-based attacks by a clear margin on both Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), which shows that MFI is better suited to realistic black-box scenarios.
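The core idea of mixing frequency components from different images can be illustrated with a short sketch. This is a minimal, hypothetical illustration (not the authors' exact MFI procedure): it takes the low-frequency spectrum of an input image and splices in the high-frequency spectrum of an auxiliary image via the 2-D Fourier transform; the function name `mix_high_frequency` and the `cutoff` parameter are illustrative, not the paper's API.

```python
import numpy as np

def mix_high_frequency(x, x_aux, cutoff=0.1):
    """Keep x's low frequencies; take high frequencies from x_aux.

    x, x_aux : 2-D grayscale images of the same shape (H, W).
    cutoff   : fraction of the spectrum radius treated as low frequency.
    """
    H, W = x.shape
    # Centered 2-D spectra of both images.
    Fx = np.fft.fftshift(np.fft.fft2(x))
    Fa = np.fft.fftshift(np.fft.fft2(x_aux))
    # Radial mask selecting the low-frequency band around the center.
    yy, xx = np.ogrid[:H, :W]
    r = np.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    low = r <= cutoff * min(H, W) / 2
    # Splice the spectra: x inside the band, x_aux outside it.
    mixed = np.where(low, Fx, Fa)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
```

In a transfer attack, the gradient of the loss would be computed on such mixed inputs over several auxiliary images, and the accumulated gradients used to update the perturbation, following the iterative scheme common to input-transformation attacks.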
Pages: 7633-7645
Page count: 13
Related Papers
50 records total
  • [21] LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate
    Wu, Tao
    Luo, Tie
    Wunsch, Donald C., II
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 6135 - 6143
  • [22] Improving the transferability of adversarial examples with path tuning
    Li, Tianyu
    Li, Xiaoyu
    Ke, Wuping
    Tian, Xuwei
    Zheng, Desheng
    Lu, Chao
    APPLIED INTELLIGENCE, 2024, 54 (23) : 12194 - 12214
  • [23] Improving Transferability of Adversarial Examples with Input Diversity
    Xie, Cihang
    Zhang, Zhishuai
    Zhou, Yuyin
    Bai, Song
    Wang, Jianyu
    Ren, Zhou
    Yuille, Alan
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 2725 - 2734
  • [24] Improving the transferability of adversarial examples through black-box feature attacks
    Wang, Maoyuan
    Wang, Jinwei
    Ma, Bin
    Luo, Xiangyang
    NEUROCOMPUTING, 2024, 595
  • [25] Improving the Transferability of Adversarial Examples with Diverse Gradients
    Cao, Yangjie
    Wang, Haobo
    Zhu, Chenxi
    Zhuang, Yan
    Li, Jie
    Chen, Xianfu
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [26] Admix: Enhancing the Transferability of Adversarial Attacks
    Wang, Xiaosen
    He, Xuanran
    Wang, Jingdong
    He, Kun
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16138 - 16147
  • [27] Enhancing the Adversarial Transferability with Channel Decomposition
    Lin B.
    Gao F.
    Zeng W.
    Chen J.
    Zhang C.
    Zhu Q.
    Zhou Y.
    Zheng D.
    Qiu Q.
    Yang S.
Computer Systems Science and Engineering, 2023, 46 (03): 3075 - 3085
  • [28] Enhancing adversarial transferability with local transformation
    Zhang, Yang
    Hong, Jinbang
    Bai, Qing
    Liang, Haifeng
    Zhu, Peican
    Song, Qun
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (01)
  • [29] Enhancing Cross-Task Black-Box Transferability of Adversarial Examples with Dispersion Reduction
    Lu, Yantao
    Jia, Yunhan
    Wang, Jianyu
    Li, Bai
    Chai, Weiheng
    Carin, Lawrence
    Velipasalar, Senem
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 937 - 946
  • [30] Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup
    Byun, Junyoung
    Kwon, Myung-Joon
    Cho, Seungju
    Kim, Yoonji
    Kim, Changick
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24648 - 24657