MAEDefense: An Effective Masked AutoEncoder Defense against Adversarial Attacks

Cited by: 0
Authors
Lyu, Wanli [1 ]
Wu, Mengjiang [1 ]
Yin, Zhaoxia [2 ]
Luo, Bin [1 ]
Affiliations
[1] Anhui Univ, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei, Peoples R China
[2] East China Normal Univ, Shanghai Key Lab Multidimens Informat Proc, Shanghai 200241, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/APSIPAASC58517.2023.10317132
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to attacks when adversarial perturbations are added to clean samples. Reconstructing clean samples from inputs that contain adversarial perturbations is a challenging task. To address this issue, this paper proposes a Masked AutoEncoder Defense (MAEDefense) framework to counter adversarial attacks. First, the adversarial sample is divided into two complementary masked images. Second, in the two masked images, the carefully crafted adversarial-noise locations are reassigned to non-adversarial-noise locations. Finally, the two reconstructed images are fused pixel-wise (weighted average) to obtain a "clean image". The proposed method requires no external training and is easy to implement. Experimental results show that, compared with state-of-the-art methods, the proposed method defends significantly better against white-box attacks and black-box transferable attacks.
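The pipeline described in the abstract (complementary masking, masked-autoencoder reconstruction, pixel-wise weighted fusion) can be illustrated with a minimal Python sketch. The patch-level random complementary split, the mae_reconstruct stub (mean-fill in place of a real pre-trained masked autoencoder), and the fusion weight alpha are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def complementary_masks(shape, patch=16, seed=0):
    # Patch-level random split of the image grid into two complementary boolean masks (assumed scheme).
    h, w = shape[0], shape[1]
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, 2, size=(h // patch, w // patch))
    mask_a = np.kron(grid, np.ones((patch, patch), dtype=int)).astype(bool)
    return mask_a, ~mask_a

def mae_reconstruct(image, visible_mask):
    # Stand-in for a pre-trained masked autoencoder: keep the visible pixels and
    # fill the masked positions with the per-channel mean, only to keep the sketch runnable.
    fill = image.mean(axis=(0, 1))
    return np.where(visible_mask[..., None], image, fill)

def maedefense(adv_image, alpha=0.5):
    # 1) split the adversarial image into two complementary masked views,
    # 2) reconstruct each view with the (stubbed) masked autoencoder,
    # 3) fuse the two reconstructions by a pixel-wise weighted average.
    mask_a, mask_b = complementary_masks(adv_image.shape)
    rec_a = mae_reconstruct(adv_image, mask_a)
    rec_b = mae_reconstruct(adv_image, mask_b)
    return alpha * rec_a + (1.0 - alpha) * rec_b

adv = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in adversarial input
clean_estimate = maedefense(adv)
print(clean_estimate.shape)  # (224, 224, 3)

In the paper, the reconstruction step is performed by a pre-trained masked autoencoder rather than the mean-fill stub above; the stub only marks where that model would plug in.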
Pages: 1915-1922
Page count: 8
Related Papers
50 records in total
  • [21] Defensive Bit Planes: Defense Against Adversarial Attacks
    Tripathi, Achyut Mani
    Behera, Swarup Ranjan
    Paul, Konark
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [22] Defense-VAE: A Fast and Accurate Defense Against Adversarial Attacks
    Li, Xiang
    Ji, Shihao
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 191 - 207
  • [23] Detection defense against adversarial attacks with saliency map
    Ye, Dengpan
    Chen, Chuanxi
    Liu, Changrui
    Wang, Hao
    Jiang, Shunzhi
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 10193 - 10210
  • [24] Symmetry Defense Against CNN Adversarial Perturbation Attacks
    Lindqvist, Blerta
    INFORMATION SECURITY, ISC 2023, 2023, 14411 : 142 - 160
  • [25] Universal Inverse Perturbation Defense Against Adversarial Attacks
    Chen J.-Y.
    Wu C.-A.
    Zheng H.-B.
    Wang W.
    Wen H.
Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (10) : 2172 - 2187
  • [26] Defense Against Adversarial Attacks on Spoofing Countermeasures of ASV
    Wu, Haibin
    Liu, Songxiang
    Meng, Helen
    Lee, Hung-yi
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6564 - 6568
  • [27] An Autoencoder Based Approach to Defend Against Adversarial Attacks for Autonomous Vehicles
    Gan, Houchao
    Liu, Chen
    2020 INTERNATIONAL CONFERENCE ON CONNECTED AND AUTONOMOUS DRIVING (METROCAD 2020), 2020, : 43 - 44
  • [28] DSCAE: a denoising sparse convolutional autoencoder defense against adversarial examples
    Hongwei Ye
    Xiaozhang Liu
    Chunlai Li
    Journal of Ambient Intelligence and Humanized Computing, 2022, 13 : 1419 - 1429
  • [29] DSCAE: a denoising sparse convolutional autoencoder defense against adversarial examples
    Ye, Hongwei
    Liu, Xiaozhang
    Li, Chunlai
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2022, 13 (03) : 1419 - 1429
  • [30] AGS: Attribution Guided Sharpening as a Defense Against Adversarial Attacks
    Tobia, Javier Perez
    Braun, Phillip
    Narayan, Apurva
    ADVANCES IN INTELLIGENT DATA ANALYSIS XX, IDA 2022, 2022, 13205 : 225 - 236