An Adversarial Contrastive Distillation Algorithm Based on Masked Auto-Encoder

Cited by: 0
Authors
Zhang, Dian [1 ]
Dong, Yun-Wei [2 ]
Affiliations
[1] School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
[2] School of Software, Northwestern Polytechnical University, Xi’an 710129, China
Keywords
Contrastive learning; Deep neural networks; Generative adversarial networks; Image enhancement; Network coding; Personnel training
DOI
10.11897/SP.J.1016.2024.02274
Abstract
With the continuous development of artificial intelligence, neural networks have exhibited exceptional performance across various domains. However, the existence of adversarial samples poses a significant challenge to the application of neural networks in security-related fields. As research progresses, there is an increasing focus on the robustness of neural networks and their inherent performance. This paper aims to enhance the adversarial robustness of neural networks. Although adversarial training has shown great potential in improving adversarial robustness, it suffers from long running times, primarily because it must generate adversarial samples for the target model at every iteration. To address the time-consuming generation of adversarial samples and the lack of diversity in adversarial training, this paper proposes a contrastive distillation algorithm based on masked autoencoders (MAE) to enhance the adversarial robustness of neural networks. Because images have low information density, pixels lost through masking can often be recovered by a neural network; masking-based methods are therefore commonly employed to increase sample diversity and improve the feature learning capabilities of neural networks. Given that adversarial training methods often require considerable time to generate adversarial samples, this paper adopts masking to mitigate the cost of continuously generating adversarial samples during adversarial training. Additionally, randomly occluding parts of an image effectively enhances sample diversity, which helps create the multi-view samples needed for contrastive learning. Firstly, to reduce the teacher model's reliance on global image features, the teacher model learns, within an improved masked autoencoder, to infer the features of masked blocks from the visible sub-blocks. This allows the teacher model to focus on reconstructing global features from limited visible parts, thereby enhancing its deep feature learning ability. Then, to mitigate the impact of adversarial interference, this paper employs knowledge distillation and contrastive learning to enhance the target model's adversarial robustness. Knowledge distillation reduces the target model's dependence on global features by transferring knowledge from the teacher model, while contrastive learning enhances the model's ability to recognize fine-grained information among images by leveraging the diversity of the generated multi-view samples. Finally, label information is used to adjust the classification head to ensure recognition accuracy. By fine-tuning the classification head with label information, the model maintains high accuracy on clean samples while improving its robustness against adversarial attacks. Experimental results on ResNet50 and WideResNet50 demonstrate an average improvement of 11.50% in adversarial accuracy on CIFAR-10 and an average improvement of 6.35% on CIFAR-100. These results validate the effectiveness of the proposed contrastive distillation algorithm based on masked autoencoders. The algorithm attenuates the impact of adversarial interference by generating adversarial samples only once, enhances sample diversity through random masking, and improves the neural network's adversarial robustness. © 2024 Science Press. All rights reserved.
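
The abstract outlines three technical steps: an MAE-style teacher trained to infer the features of masked blocks from visible ones, contrastive distillation from that teacher to the target model using masked multi-view samples (with adversarial samples generated only once), and label-based fine-tuning of the classification head. The sketch below, written in PyTorch, shows one way the masking-based multi-view generation and the contrastive distillation loss could fit together; it is not the authors' implementation, and every interface and hyperparameter here (patch size, mask ratio, temperature, the assumption that both networks return pooled feature vectors) is an illustrative assumption.

import torch
import torch.nn.functional as F


def random_mask(images: torch.Tensor, patch: int = 4, mask_ratio: float = 0.5) -> torch.Tensor:
    # Zero out a random subset of patch x patch blocks to create one masked view.
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=images.device) > mask_ratio).float()
    mask = F.interpolate(keep, scale_factor=patch, mode="nearest")  # block mask -> pixel mask
    return images * mask


def contrastive_distillation_loss(student_feats, teacher_feats, temperature: float = 0.1):
    # InfoNCE-style objective: a student feature should match the teacher feature
    # of the same image and repel the teacher features of other images in the batch.
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    logits = s @ t.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)


def training_step(student, teacher, images, adv_images, optimizer):
    # adv_images are assumed to have been generated once, before training,
    # while masking supplies cheap extra views at every iteration.
    teacher.eval()
    with torch.no_grad():
        t_feats = teacher(random_mask(images))           # targets from a masked clean view
    s_feats_clean = student(random_mask(images))         # student on a second masked view
    s_feats_adv = student(adv_images)                    # student on the pre-generated adversarial view
    loss = (contrastive_distillation_loss(s_feats_clean, t_feats)
            + contrastive_distillation_loss(s_feats_adv, t_feats))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The label-based fine-tuning of the classification head described in the abstract would follow this stage as a separate supervised pass and is not reproduced in the sketch.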
Pages: 2274-2288