Enhancing Transferability of Adversarial Examples Through Mixed-Frequency Inputs

Times Cited: 1
Authors
Qian, Yaguan [1 ]
Chen, Kecheng [1 ]
Wang, Bin [2 ]
Gu, Zhaoquan [3 ]
Ji, Shouling [4 ]
Wang, Wei [5 ]
Zhang, Yanchun [6 ,7 ]
Affiliations
[1] Zhejiang Univ Sci & Technol, Sch Big Data Sci, Hangzhou 310023, Peoples R China
[2] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[3] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518071, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[5] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[6] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[7] Victoria Univ, Sch Comp Sci & Math, Melbourne, Vic 8001, Australia
Funding
National Natural Science Foundation of China
Keywords
Frequency-domain analysis; Closed box; Noise; Glass box; Training; Optimization; Computational modeling; Security vulnerability; Adversarial examples; Transfer-based attack; Fourier transform
DOI
10.1109/TIFS.2024.3430508
CLC Number
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Recent studies have shown that Deep Neural Networks (DNNs) are easily deceived by adversarial examples, revealing a serious vulnerability. Owing to their transferability, adversarial examples crafted on one model can attack other models with different architectures, which is known as a transfer-based black-box attack. Input transformation is one of the most effective ways to improve adversarial transferability; in particular, attacks that fuse information from images of other categories point to a promising direction for adversarial attacks. However, existing techniques rely on input transformations in the spatial domain, which ignore the frequency information of the image and limit transferability. To tackle this issue, we propose Mixed-Frequency Inputs (MFI) from a frequency-domain perspective. MFI alleviates the overfitting of adversarial examples to the source model by incorporating high-frequency components from various kinds of images when computing the gradient. By accumulating these high-frequency components, MFI obtains a more stable gradient direction at each iteration, leading to better local maxima and enhanced transferability. Extensive experiments on ImageNet-compatible datasets demonstrate that MFI outperforms existing transform-based attacks by a clear margin on both Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), which shows that MFI is better suited to realistic black-box scenarios.
Pages: 7633 - 7645
Number of Pages: 13
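As a rough illustration of the approach summarized in the abstract, below is a minimal PyTorch sketch of a mixed-frequency input transformation inside an iterative transfer attack: before each gradient computation, the high-frequency band of the current adversarial image is blended with the high-frequency band of another image, and the resulting gradients are averaged. This is not the authors' released code; the helper names (split_frequency, mfi_attack), the square low-pass cutoff radius, the mixing weight, the number of mixtures, and the choice to draw auxiliary images by shuffling the batch are illustrative assumptions rather than details from the paper.

import torch
import torch.fft


def split_frequency(x, radius):
    # Split an image batch (N, C, H, W) into low- and high-frequency parts
    # using a centered square low-pass mask in the 2-D Fourier domain.
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2], x.shape[-1]
    mask = torch.zeros_like(freq, dtype=torch.bool)
    mask[..., h // 2 - radius:h // 2 + radius, w // 2 - radius:w // 2 + radius] = True
    low = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1))).real
    return low, x - low


def mfi_attack(model, x, y, loss_fn, eps=16 / 255, steps=10,
               radius=8, mix_weight=0.5, num_mixes=4):
    # Iterative sign-gradient attack whose gradient at every step is averaged
    # over copies of the adversarial image whose high-frequency band has been
    # blended with the high-frequency band of another (shuffled) image.
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad = torch.zeros_like(x_adv)
        for _ in range(num_mixes):
            aux = x[torch.randperm(x.size(0), device=x.device)]  # auxiliary images
            low, high = split_frequency(x_adv, radius)
            _, aux_high = split_frequency(aux, radius)
            mixed = low + (1 - mix_weight) * high + mix_weight * aux_high
            mixed = mixed.detach().requires_grad_(True)
            loss = loss_fn(model(mixed), y)
            grad = grad + torch.autograd.grad(loss, mixed)[0]
        grad = grad / num_mixes
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

Averaging gradients over several frequency mixtures plays the same role as other input-transformation attacks: it dampens gradient components that are specific to the source model, which is the mechanism the abstract credits for the improved transferability.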