Defending Video Recognition Model Against Adversarial Perturbations via Defense Patterns

Cited by: 0
Authors
Lee, Hong Joo [1]
Ro, Yong Man [1]
Affiliations
[1] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Image & Video Syst Lab, Daejeon 34141, South Korea
Keywords
Computational modeling; Perturbation methods; Adaptation models; Training; Analytical models; Predictive models; Pattern recognition; Defense patterns (DPs); robust video recognition; video adversarial defense; ROBUSTNESS; ENSEMBLE
DOI
10.1109/TDSC.2023.3346064
CLC Number
TP3 [Computing Technology and Computer Technology]
Subject Classification Code
0812
Abstract
Deep Neural Networks (DNNs) have achieved wide success across many domains, but they are vulnerable to adversarial attacks. Recent studies have shown that video recognition models are likewise susceptible to adversarial perturbations, yet existing image-domain defense strategies transfer poorly to the video domain: they do not account for temporal dynamics, and they impose a high computational cost when training video recognition models. This article first investigates the temporal vulnerability of video recognition models by quantifying the effect of temporal perturbations on model performance. Based on these investigations, we propose Defense Patterns (DPs), which effectively protect video recognition models when added to the input video frames. The DPs are generated on top of a pre-trained model, eliminating the need for retraining or fine-tuning and thereby significantly reducing computational cost. Experimental results on two benchmark datasets and various action recognition models demonstrate that the proposed method enhances the robustness of video recognition models.
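The core operation the abstract describes, adding a fixed, pre-optimized defense pattern to the input frames of a frozen model, can be sketched as follows. The function name, array shapes, and the [0, 1] pixel range are illustrative assumptions for this sketch, not the paper's actual implementation; how the pattern itself is optimized is described in the full article.

```python
import numpy as np

def apply_defense_pattern(video, pattern, clip_min=0.0, clip_max=1.0):
    """Add a pre-optimized defense pattern to every input frame.

    video:   array of shape (T, H, W, C), pixel values in [clip_min, clip_max]
    pattern: array of shape (H, W, C), broadcast over all T frames,
             or (T, H, W, C) for a per-frame (temporal) pattern
    """
    # The model itself stays frozen; only the input is modified.
    defended = video + pattern  # NumPy broadcasting handles either pattern shape
    return np.clip(defended, clip_min, clip_max)

# Toy example: 8 frames of 32x32 RGB video with a small additive pattern.
rng = np.random.default_rng(0)
video = rng.random((8, 32, 32, 3))
pattern = 0.03 * rng.standard_normal((32, 32, 3))
defended = apply_defense_pattern(video, pattern)
```

Because the defended frames are produced by a single addition and clip, the defense adds negligible inference-time overhead compared with retraining or fine-tuning the recognition model.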
Pages: 4110-4121
Number of pages: 12
Related Papers
50 in total
  • [41] Towards Interpretable Defense Against Adversarial Attacks via Causal Inference
    Min Ren
    Yun-Long Wang
    Zhao-Feng He
    Machine Intelligence Research, 2022, 19 : 209 - 226
  • [42] RADAP: A Robust and Adaptive Defense Against Diverse Adversarial Patches on Face Recognition
    Liu, Xiaoliang
    Shen, Furao
    Zhao, Jian
    Nie, Changhai
    PATTERN RECOGNITION, 2025, 157
  • [43] Black-box Adversarial Attack Against Road Sign Recognition Model via PSO
    Chen J.-Y.
    Chen Z.-Q.
    Zheng H.-B.
    Shen S.-J.
    Su M.-M.
    Ruan Jian Xue Bao/Journal of Software, 2020, 31 (09): 2785 - 2801
  • [44] MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
    Jia, Jinyuan
    Salem, Ahmed
    Backes, Michael
    Zhang, Yang
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 259 - 274
  • [45] Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser
    Joshi, Sonal
    Kataria, Saurabh
    Shao, Yiwen
    Zelasko, Piotr
    Villalba, Jesus
    Khudanpur, Sanjeev
    Dehak, Najim
    INTERSPEECH 2022, 2022, : 5035 - 5039
  • [46] Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks
    Pony, Roi
    Naeh, Itay
    Mannor, Shie
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 515 - 524
  • [47] Deep Image Restoration Model: A Defense Method Against Adversarial Attacks
    Ali, Kazim
    Quershi, Adnan N.
    Bin Arifin, Ahmad Alauddin
    Bhatti, Muhammad Shahid
    Sohail, Abid
    Hassan, Rohail
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 71 (02): : 2209 - 2224
  • [48] Defense Strategies Against Adversarial Jamming Attacks via Deep Reinforcement Learning
    Wang, Feng
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2020 54TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2020, : 336 - 341
  • [49] No-Box Universal Adversarial Perturbations Against Image Classifiers via Artificial Textures
    Mou, Ningping
    Guo, Binqing
    Zhao, Lingchen
    Wang, Cong
    Zhao, Yue
    Wang, Qian
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 9803 - 9818
  • [50] One Parameter Defense-Defending Against Data Inference Attacks via Differential Privacy
    Ye, Dayong
    Shen, Sheng
    Zhu, Tianqing
    Liu, Bo
    Zhou, Wanlei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 1466 - 1480