Enhancing Transferability of Adversarial Examples Through Mixed-Frequency Inputs

Cited: 1
Authors
Qian, Yaguan [1 ]
Chen, Kecheng [1 ]
Wang, Bin [2 ]
Gu, Zhaoquan [3 ]
Ji, Shouling [4 ]
Wang, Wei [5 ]
Zhang, Yanchun [6 ,7 ]
Affiliations
[1] Zhejiang Univ Sci & Technol, Sch Big Data Sci, Hangzhou 310023, Peoples R China
[2] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[3] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518071, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[5] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[6] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[7] Victoria Univ, Sch Comp Sci & Math, Melbourne, Vic 8001, Australia
Fund
National Natural Science Foundation of China;
Keywords
Frequency-domain analysis; Closed box; Noise; Glass box; Training; Optimization; Computational modeling; Security vulnerability; adversarial examples; transfer-based attack; Fourier transform;
DOI
10.1109/TIFS.2024.3430508
Chinese Library Classification
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
Recent studies have shown that Deep Neural Networks (DNNs) are easily deceived by adversarial examples, revealing a serious vulnerability. Owing to their transferability, adversarial examples crafted on one model can also attack models with different architectures, enabling transfer-based black-box attacks. Input transformation is one of the most effective ways to improve adversarial transferability. In particular, attacks that fuse information from other categories of images reveal a promising direction for adversarial attacks. However, current techniques rely on input transformations in the spatial domain, which ignore the frequency information of the image and limit transferability. To tackle this issue, we propose Mixed-Frequency Inputs (MFI) from a frequency-domain perspective. MFI alleviates the overfitting of adversarial examples to the source model by incorporating high-frequency components from various kinds of images when computing the gradient. By accumulating these high-frequency components, MFI acquires a more stable gradient direction in each iteration, leading to the discovery of better local maxima and enhanced transferability. Extensive experimental results on ImageNet-compatible datasets demonstrate that MFI outperforms existing transform-based attacks by a clear margin on both Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), showing that MFI is better suited to realistic black-box scenarios.
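The core idea described above can be illustrated with a minimal sketch: blend the high-frequency band of an auxiliary image into the input (via the 2-D Fourier transform) before the gradient step. The function name, the square low-frequency cutoff, and the linear blending rule below are illustrative assumptions for exposition, not the paper's exact MFI algorithm.

```python
import numpy as np

def mix_high_frequencies(x, x_aux, cutoff=0.2, weight=0.5):
    """Blend the high-frequency band of x_aux into x (both HxWxC in [0, 1]).

    cutoff : fraction of the spectrum treated as "low frequency" (kept from x).
    weight : how strongly auxiliary high frequencies replace x's own.
    """
    # 2-D FFT per channel, shifted so low frequencies sit at the centre
    Fx = np.fft.fftshift(np.fft.fft2(x, axes=(0, 1)), axes=(0, 1))
    Fa = np.fft.fftshift(np.fft.fft2(x_aux, axes=(0, 1)), axes=(0, 1))

    h, w = x.shape[:2]
    cy, cx = h // 2, w // 2
    ry, rx = int(h * cutoff / 2), int(w * cutoff / 2)

    # Boolean mask of the central low-frequency square; its complement is "high"
    low = np.zeros((h, w), dtype=bool)
    low[cy - ry:cy + ry, cx - rx:cx + rx] = True
    high = ~low

    # Keep x's low frequencies; mix the auxiliary image's high frequencies in
    mixed = Fx.copy()
    mixed[high] = (1 - weight) * Fx[high] + weight * Fa[high]

    x_mix = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)),
                         axes=(0, 1)).real
    return np.clip(x_mix, 0.0, 1.0)
```

In a transfer attack, `x_mix` would replace the raw input when computing the loss gradient on the source model, so that successive iterations see varied high-frequency content and the accumulated gradient direction overfits less to any single image's spectrum.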
Pages: 7633-7645
Page count: 13
Related Papers
50 items
  • [41] REGULARIZED INTERMEDIATE LAYERS ATTACK: ADVERSARIAL EXAMPLES WITH HIGH TRANSFERABILITY
    Li, Xiaorui
    Cui, Weiyu
    Huang, Jiawei
    Wang, Wenyi
    Chen, Jianwen
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1904 - 1908
  • [42] Boosting the Transferability of Video Adversarial Examples via Temporal Translation
    Wei, Zhipeng
    Chen, Jingjing
    Wu, Zuxuan
    Jiang, Yu-Gang
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2659 - 2667
  • [43] Boosting the transferability of adversarial examples via stochastic serial attack
    Hao, Lingguang
    Hao, Kuangrong
    Wei, Bing
    Tang, Xue-song
    NEURAL NETWORKS, 2022, 150 : 58 - 67
  • [44] Improving transferability of adversarial examples by saliency distribution and data augmentation
    Dong, Yansong
    Tang, Long
    Tian, Cong
    Yu, Bin
    Duan, Zhenhua
    COMPUTERS & SECURITY, 2022, 120
  • [45] Assessing Transferability of Adversarial Examples against Malware Detection Classifiers
    Wang, Yixiang
    Liu, Jiqiang
    Chang, Xiaolin
    CF '19 - PROCEEDINGS OF THE 16TH ACM INTERNATIONAL CONFERENCE ON COMPUTING FRONTIERS, 2019, : 211 - 214
  • [46] A MIXED-FREQUENCY MODEL OF REGIONAL OUTPUT
    ISRAILEVICH, PR
    KUTTNER, KN
    JOURNAL OF REGIONAL SCIENCE, 1993, 33 (03) : 321 - 342
  • [47] Enhancing the transferability of adversarial samples with random noise techniques
    Huang, Jiahao
    Wen, Mi
    Wei, Minjie
    Bi, Yanbing
    COMPUTERS & SECURITY, 2024, 136
  • [48] Enhancing adversarial transferability with partial blocks on vision transformer
    Han, Yanyang
    Liu, Ju
    Liu, Xiaoxi
    Jiang, Xiao
    Gu, Lingchen
    Gao, Xuesong
    Chen, Weiqiang
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (22): : 20249 - 20262
  • [49] Enhancing the Transferability of Targeted Attacks with Adversarial Perturbation Transform
    Deng, Zhengjie
    Xiao, Wen
    Li, Xiyan
    He, Shuqian
    Wang, Yizhen
    ELECTRONICS, 2023, 12 (18)
  • [50] Enhancing the Transferability of Adversarial Patch via Alternating Minimization
    Wang, Yang
    Chen, Lei
    Yang, Zhen
    Cao, Tieyong
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)