Enhancing Transferability of Adversarial Examples Through Mixed-Frequency Inputs

Cited by: 1
Authors
Qian, Yaguan [1 ]
Chen, Kecheng [1 ]
Wang, Bin [2 ]
Gu, Zhaoquan [3 ]
Ji, Shouling [4 ]
Wang, Wei [5 ]
Zhang, Yanchun [6 ,7 ]
Affiliations
[1] Zhejiang Univ Sci & Technol, Sch Big Data Sci, Hangzhou 310023, Peoples R China
[2] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[3] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518071, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[5] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[6] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[7] Victoria Univ, Sch Comp Sci & Math, Melbourne, Vic 8001, Australia
Funding
National Natural Science Foundation of China;
Keywords
Frequency-domain analysis; Closed box; Noise; Glass box; Training; Optimization; Computational modeling; Security vulnerability; adversarial examples; transfer-based attack; Fourier transform;
DOI
10.1109/TIFS.2024.3430508
CLC classification
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Recent studies have shown that Deep Neural Networks (DNNs) are easily deceived by adversarial examples, revealing their serious vulnerability. Owing to their transferability, adversarial examples crafted on one model can also attack other models with different architectures, a threat known as transfer-based black-box attacks. Input transformation is one of the most effective techniques for improving adversarial transferability. In particular, attacks that fuse information from other categories of images point to a promising direction for adversarial attacks. However, current techniques rely on input transformations in the spatial domain, which ignore the frequency-domain information of the image and thus limit transferability. To tackle this issue, we propose Mixed-Frequency Inputs (MFI), an approach built on a frequency-domain perspective. MFI alleviates the overfitting of adversarial examples to the source model by incorporating high-frequency components from various kinds of images when computing the gradient. By accumulating these high-frequency components, MFI obtains a steadier gradient direction at each iteration, leading to better local maxima and enhanced transferability. Extensive experiments on ImageNet-compatible datasets demonstrate that MFI outperforms existing transformation-based attacks by a clear margin on both Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), showing that MFI is better suited to realistic black-box scenarios.
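The core idea the abstract describes, replacing an image's high-frequency components with those of other images before computing the attack gradient, can be sketched as follows. This is a minimal illustration assuming a simple low-pass/high-pass split via a centered square mask in the 2-D Fourier spectrum; the paper's exact mixing rule, mask shape, and accumulation scheme may differ, and the function name `mixed_frequency_input` and the `radius` parameter are hypothetical.

```python
import numpy as np

def mixed_frequency_input(x, x_other, radius=8):
    """Keep the low-frequency content of image x and splice in the
    high-frequency components of another image x_other.

    x, x_other: float arrays of shape (H, W, C).
    radius: half-side of the centered square kept as "low frequency".
    """
    # 2-D FFT per channel, with the DC component shifted to the center
    X = np.fft.fftshift(np.fft.fft2(x, axes=(0, 1)), axes=(0, 1))
    Xo = np.fft.fftshift(np.fft.fft2(x_other, axes=(0, 1)), axes=(0, 1))

    h, w = x.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    # Boolean low-pass mask: True inside a centered square of side 2*radius
    low = (np.abs(yy - h // 2) <= radius) & (np.abs(xx - w // 2) <= radius)
    low = low[..., None]  # broadcast the mask over channels

    # Low frequencies from x, high frequencies from the other image
    mixed = np.where(low, X, Xo)
    out = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)), axes=(0, 1)).real
    return out.astype(x.dtype)

# Toy usage on random "images"; in an attack, the gradient of the model's
# loss would be taken at x_mix instead of x at each iteration.
rng = np.random.default_rng(0)
x = rng.random((32, 32, 3)).astype(np.float32)
x_other = rng.random((32, 32, 3)).astype(np.float32)
x_mix = mixed_frequency_input(x, x_other)
print(x_mix.shape)
```

Averaging gradients over several such mixed inputs, drawn from different auxiliary images, is one plausible way to obtain the steadier gradient direction the abstract attributes to MFI.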
Pages: 7633-7645
Page count: 13
Related Papers
50 records in total
  • [1] Improving the transferability of adversarial examples through semantic-mixup inputs
    Gan, Fuquan
    Wo, Yan
    KNOWLEDGE-BASED SYSTEMS, 2025, 316
  • [2] Enhancing the Transferability of Adversarial Examples with Feature Transformation
    Xu, Hao-Qi
    Hu, Cong
    Yin, He-Feng
    MATHEMATICS, 2022, 10 (16)
  • [3] Enhancing Transferability of Adversarial Examples with Spatial Momentum
    Wang, Guoqiu
    Yan, Huanqian
    Wei, Xingxing
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, PRCV 2022, 2022, 13534 : 593 - 604
  • [4] Enhancing the transferability of adversarial examples on vision transformers
    Guan, Yujiao
    Yang, Haoyu
    Qu, Xiaotong
    Wang, Xiaodong
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [5] Enhancing Transferability of Adversarial Examples by Successively Attacking Multiple Models
    Zhang, Xiaolin
    Zhang, Wenwen
    Liu, Lixin
    Wang, Yongping
    Gao, Lu
    Zhang, Shuai
    INTERNATIONAL JOURNAL OF NETWORK SECURITY, 2023, 25 (02) : 306 - 316
  • [6] Improving the transferability of adversarial examples through neighborhood attribution
    Ke, Wuping
    Zheng, Desheng
    Li, Xiaoyu
    He, Yuanhang
    Li, Tianyu
    Min, Fan
    KNOWLEDGE-BASED SYSTEMS, 2024, 296
  • [7] FDT: Improving the transferability of adversarial examples with frequency domain transformation
    Ling, Jie
    Chen, Jinhui
    Li, Honglei
    COMPUTERS & SECURITY, 2024, 144
  • [8] Ranking the Transferability of Adversarial Examples
    Levy, Moshe
    Amit, Guy
    Elovici, Yuval
    Mirsky, Yisroel
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (05)
  • [9] Enhancing transferability of adversarial examples via rotation-invariant attacks
    Duan, Yexin
    Zou, Junhua
    Zhou, Xingyu
    Zhang, Wu
    Zhang, Jin
    Pan, Zhisong
    IET COMPUTER VISION, 2022, 16 (01) : 1 - 11
  • [10] Enhancing the Transferability of Adversarial Examples Based on Nesterov Momentum for Recommendation Systems
    Qian, Fulan
    Yuan, Bei
    Chen, Hai
    Chen, Jie
    Lian, Defu
    Zhao, Shu
    IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (05) : 1276 - 1287