Towards Transferable Adversarial Attacks with Centralized Perturbation

Citations: 0
Authors
Wu, Shangbo [1 ]
Tan, Yu-an [1 ]
Wang, Yajie [1 ]
Ma, Ruinan [1 ]
Ma, Wencong [2 ]
Li, Yuanzhang [2 ]
Affiliations
[1] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing, Peoples R China
[2] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6 | 2024
Funding
National Natural Science Foundation of China;
Keywords
EXAMPLES;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs), rendering attacks viable in real-world scenarios. Current transferable attacks create adversarial perturbation over the entire image, resulting in excessive noise that overfits the source model. Concentrating perturbation on dominant, model-agnostic image regions is crucial to improving adversarial efficacy. However, limiting perturbation to local regions in the spatial domain proves inadequate for improving transferability. To this end, we propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating centralized perturbation. We devise a systematic pipeline that dynamically constrains perturbation optimization to dominant frequency coefficients. The constraint is optimized in parallel at each iteration, ensuring that perturbation optimization stays directionally aligned with model prediction. Our approach centralizes perturbation towards sample-specific important frequency features, which are shared across DNNs, effectively mitigating source-model overfitting. Experiments demonstrate that by dynamically centralizing perturbation on dominant frequency coefficients, the crafted adversarial examples exhibit stronger transferability and bypass various defenses.
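To make the frequency-domain centralization concrete, the sketch below shows one plausible realization: an iterative L-infinity attack in which, after every gradient step, the accumulated perturbation is projected onto its largest-magnitude DCT coefficients before being re-applied to the image. This is not the authors' released implementation; the toy surrogate model, the keep_ratio parameter, the per-channel 2D DCT, and the fixed step sizes are all illustrative assumptions.

```python
# Minimal sketch (not the paper's official code): PGD-style attack whose perturbation
# is repeatedly "centralized" onto dominant DCT coefficients.
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import dctn, idctn


def centralize_perturbation(delta: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the largest-magnitude DCT coefficients of the perturbation, per channel."""
    d = delta.detach().cpu().numpy()
    out = np.zeros_like(d)
    for b in range(d.shape[0]):
        for c in range(d.shape[1]):
            coeffs = dctn(d[b, c], norm="ortho")            # 2D DCT of this channel
            k = max(1, int(keep_ratio * coeffs.size))
            thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
            coeffs[np.abs(coeffs) < thresh] = 0.0            # drop non-dominant frequencies
            out[b, c] = idctn(coeffs, norm="ortho")          # back to the spatial domain
    return torch.from_numpy(out).to(delta.device, delta.dtype)


def centralized_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, keep_ratio=0.1):
    """Iterative L_inf attack with a frequency-domain projection after every step."""
    loss_fn = nn.CrossEntropyLoss()
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = delta + alpha * grad.sign()              # ascend the loss (untargeted)
            delta = centralize_perturbation(delta, keep_ratio)
            delta = delta.clamp(-eps, eps)                   # L_inf budget
            delta = (x + delta).clamp(0, 1) - x              # keep pixels in [0, 1]
    return (x + delta).detach()


if __name__ == "__main__":
    # Toy surrogate model and random data, only to demonstrate the mechanics end to end.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    ).eval()
    x = torch.rand(2, 3, 32, 32)
    y = torch.randint(0, 10, (2,))
    x_adv = centralized_attack(model, x, y)
    print("max |perturbation|:", (x_adv - x).abs().max().item())
```

Note that the hard top-k mask above is only a stand-in: per the abstract, the paper optimizes the coefficient constraint itself in parallel at each iteration rather than fixing a static keep ratio.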
Pages: 6109-6116
Page count: 8