Defending Video Recognition Model Against Adversarial Perturbations via Defense Patterns

Times Cited: 0
Authors
Lee, Hong Joo [1 ]
Ro, Yong Man [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Image & Video Syst Lab, Daejeon 34141, South Korea
Keywords
Computational modeling; Perturbation methods; Adaptation models; Training; Analytical models; Predictive models; Pattern recognition; Defense patterns (DPs); robust video recognition; video adversarial defense; ROBUSTNESS; ENSEMBLE
DOI
10.1109/TDSC.2023.3346064
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep Neural Networks (DNNs) have been widely successful in various domains, but they are vulnerable to adversarial attacks. Recent studies have demonstrated that video recognition models are also susceptible to adversarial perturbations, yet existing defense strategies from the image domain do not transfer well to the video domain: they fail to account for temporal dynamics and incur a high computational cost when training video recognition models. This article first investigates the temporal vulnerability of video recognition models by quantifying the effect of temporal perturbations on model performance. Based on these investigations, we propose Defense Patterns (DPs), which effectively protect video recognition models when added to the input video frames. The DPs are generated on top of a pre-trained model, eliminating the need for retraining or fine-tuning and thereby significantly reducing the computational cost. Experimental results on two benchmark datasets and various action recognition models demonstrate the effectiveness of the proposed method in enhancing the robustness of video recognition models.
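To make the mechanism described in the abstract concrete, below is a minimal sketch of how an input-additive defense pattern could be optimized on top of a frozen, pre-trained video classifier. Everything here is illustrative rather than the paper's actual algorithm: `model`, `videos`, `labels`, the plain cross-entropy objective, and the `eps` clamp are placeholder assumptions, and the paper's real DP generation procedure (its loss and how it handles temporal perturbations) may differ.

```python
import torch
import torch.nn.functional as F

def generate_defense_pattern(model, videos, labels, steps=100, lr=1e-2, eps=8 / 255):
    """Optimize one additive defense pattern while the pre-trained video
    classifier stays frozen (no retraining or fine-tuning).

    videos: float tensor of shape (N, T, C, H, W), values in [0, 1]
    labels: long tensor of shape (N,)
    Returns a pattern of shape (T, C, H, W) broadcast-added to inputs.
    """
    model.eval()
    for p in model.parameters():          # freeze the backbone
        p.requires_grad_(False)

    dp = torch.zeros_like(videos[0], requires_grad=True)   # (T, C, H, W)
    optimizer = torch.optim.Adam([dp], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Push the frozen model toward confident, correct predictions
        # on pattern-augmented clips.
        logits = model((videos + dp).clamp(0.0, 1.0))
        loss = F.cross_entropy(logits, labels)
        loss.backward()
        optimizer.step()
        with torch.no_grad():             # keep the pattern small / imperceptible
            dp.clamp_(-eps, eps)

    return dp.detach()

# Inference: add the pattern to an incoming (possibly attacked) clip.
# pred = model((clip.unsqueeze(0) + dp).clamp(0.0, 1.0)).argmax(-1)
```

Because only the pattern `dp` is optimized while the backbone stays frozen, the cost is far below retraining or adversarially fine-tuning the full video model, which is the practical advantage the abstract emphasizes.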
Pages: 4110-4121
Page Count: 12
Related Papers
50 records in total
  • [31] Wang, Mingde; Liu, Zhijing. Defense against Adversarial Attacks in Image Recognition Based on Multilayer Filters. Applied Sciences-Basel, 2024, 14 (18).
  • [32] Platonov, V. V.; Grigorjeva, N. M. Defense against Adversarial Attacks on Image Recognition Systems Using an Autoencoder. Automatic Control and Computer Sciences, 2023, 57: 989-995.
  • [33] Shi, Xiaowen; Zhou, Chao; Wang, Yuan-Gen. Generative adversarial defense via conditional diffusion model. Multimedia Systems, 2025, 31 (01).
  • [34] Liu, Aishan; Tang, Shiyu; Chen, Xinyun; Huang, Lei; Qin, Haotong; Liu, Xianglong; Tao, Dacheng. Towards Defending Multiple ℓp-Norm Bounded Adversarial Perturbations via Gated Batch Normalization. International Journal of Computer Vision, 2024, 132 (06): 1881-1898.
  • [35] Zhai, Rui; Ni, Rongrong; Chen, Yu; Yu, Yang; Zhao, Yao. Defending Fake via Warning: Universal Proactive Defense Against Face Manipulation. IEEE Signal Processing Letters, 2023, 30: 1072-1076.
  • [36] Zhang, Ao; Ma, Jinwen. DefenseVGAE: Defending Against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder. Advanced Intelligent Computing Technology and Applications, Pt IV, ICIC 2024, 2024, 14865: 313-324.
  • [37] Du, Peilun; Zheng, Xiaolong; Liu, Liang; Ma, Huadong. Defending Against Universal Attack via Curvature-Aware Category Adversarial Training. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 2470-2474.
  • [38] Zhang, Ao; Ma, Jinwen. DefenseVGAE: Defending against adversarial attacks on graph data via a variational graph autoencoder. arXiv preprint.
  • [39] Ren, Min; Wang, Yun-Long; He, Zhao-Feng. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference. Machine Intelligence Research, 2022, (03): 209-226.
  • [40] Ren, Min; Wang, Yun-Long; He, Zhao-Feng. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference. Machine Intelligence Research, 2022, 19 (03): 209-226.