MAEDefense: An Effective Masked AutoEncoder Defense against Adversarial Attacks

Citations: 0
Authors
Lyu, Wanli [1 ]
Wu, Mengjiang [1 ]
Yin, Zhaoxia [2 ]
Luo, Bin [1 ]
Affiliations
[1] Anhui Univ, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei, Peoples R China
[2] East China Normal Univ, Shanghai Key Lab Multidimens Informat Proc, Shanghai 200241, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/APSIPAASC58517.2023.10317132
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to attacks in which adversarial perturbations are added to clean samples. Reconstructing the clean sample from an adversarially perturbed input is a challenging task. To address this issue, this paper proposes a Masked AutoEncoder Defense (MAEDefense) framework to counter adversarial attacks. First, the adversarial sample is split into two complementary masked images. Second, in each masked image, the locations carrying carefully crafted adversarial noise are reconstructed from the non-adversarial locations. Finally, the two reconstructed images are fused by pixel-wise weighted averaging to obtain a "clean" image. The proposed method requires no external training and is easy to implement. Experimental results show that the proposed method defends significantly better against white-box attacks and black-box transferable attacks than state-of-the-art methods.
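The pipeline described in the abstract (complementary masking → reconstruction → pixel-wise weighted fusion) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the masking pattern (here a checkerboard of patches) and the reconstruction step (here a simple mean-fill stand-in for the pretrained masked autoencoder) are assumptions, since the abstract does not specify them.

```python
import numpy as np

def complementary_masks(shape, patch=2):
    """Build two complementary boolean masks over an image grid.
    Checkerboard pattern of `patch`-sized blocks (an assumed pattern)."""
    ys, xs = np.indices(shape)
    m1 = ((ys // patch + xs // patch) % 2 == 0)
    return m1, ~m1  # complementary: together they cover every pixel

def reconstruct(img, keep_mask):
    """Stand-in for MAE reconstruction: keep visible pixels, fill the
    masked-out pixels with the mean of the visible ones."""
    out = img.copy()
    out[~keep_mask] = img[keep_mask].mean()
    return out

def mae_defense_sketch(adv, w=0.5):
    """Split -> reconstruct each masked image -> pixel-wise weighted fusion."""
    m1, m2 = complementary_masks(adv.shape)
    r1 = reconstruct(adv, m1)          # reconstruction from view 1
    r2 = reconstruct(adv, m2)          # reconstruction from the complement
    return w * r1 + (1.0 - w) * r2     # weighted average of the two outputs

adv = np.random.rand(8, 8)             # placeholder "adversarial" image
clean = mae_defense_sketch(adv)
```

In the actual method each masked image is reconstructed by a masked autoencoder, so adversarial pixels are regenerated from surrounding clean context rather than mean-filled; the fusion step is the same pixel-wise weighted average.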
Pages: 1915 - 1922
Page count: 8