Scale-Aware Spatio-Temporal Relation Learning for Video Anomaly Detection

Cited by: 15
Authors
Li, Guoqiu [1 ]
Cai, Guanxiong [2 ]
Zeng, Xingyu [2 ]
Zhao, Rui [2 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] SenseTime Res, Shanghai, Peoples R China
[3] Shanghai Jiao Tong Univ, Qing Yuan Res Inst, Shanghai, Peoples R China
Keywords
Scale-aware; Weakly-supervised video anomaly detection; Spatio-temporal relation modeling;
DOI
10.1007/978-3-031-19772-7_20
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recent progress in video anomaly detection (VAD) has shown that feature discrimination is the key to effectively distinguishing anomalies from normal events. We observe that many anomalous events occur in limited local regions, and the severe background noise increases the difficulty of feature learning. In this paper, we propose a scale-aware weakly supervised learning approach to capture local and salient anomalous patterns from the background, using only coarse video-level labels as supervision. We achieve this by segmenting frames into non-overlapping patches and then capturing inconsistencies among different regions through our patch spatial relation (PSR) module, which consists of self-attention mechanisms and dilated convolutions. To address the scale variation of anomalies and enhance the robustness of our method, a multi-scale patch aggregation method is further introduced to enable local-to-global spatial perception by merging features of patches with different scales. Considering the importance of temporal cues, we extend the relation modeling from the spatial domain to the spatio-temporal domain with the help of the existing video temporal relation network to effectively encode the spatio-temporal dynamics in the video. Experimental results show that our proposed method achieves new state-of-the-art performance on UCF-Crime and ShanghaiTech benchmarks. Code is available at https://github.com/nutuniv/SSRL.
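The abstract describes two core mechanisms: a patch spatial relation (PSR) module that relates non-overlapping frame patches through self-attention and dilated convolutions, and a multi-scale patch aggregation step that merges patch features of different scales for local-to-global perception. Below is a minimal, illustrative PyTorch sketch of these two ideas; the names (PatchSpatialRelation, multiscale_patch_features) and all hyperparameters are assumptions for exposition, not the authors' implementation, which is available at the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchSpatialRelation(nn.Module):
    # Hypothetical sketch: relate non-overlapping frame patches with
    # self-attention plus a dilated convolution over the patch sequence.
    def __init__(self, dim: int, num_heads: int = 4, dilation: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Dilated 1-D convolution enlarges the receptive field across patches
        # while keeping the sequence length unchanged (padding == dilation).
        self.dilated = nn.Conv1d(dim, dim, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, dim) features of non-overlapping patches
        attn_out, _ = self.attn(patches, patches, patches)
        x = self.norm1(patches + attn_out)
        conv_out = self.dilated(x.transpose(1, 2)).transpose(1, 2)
        return self.norm2(x + conv_out)


def multiscale_patch_features(frame_feat: torch.Tensor,
                              scales=(1, 2, 4)) -> torch.Tensor:
    # Hypothetical multi-scale aggregation: pool a frame feature map into patch
    # grids of several sizes and concatenate the resulting patch tokens.
    # frame_feat: (batch, channels, height, width)
    tokens = []
    for s in scales:
        pooled = F.adaptive_avg_pool2d(frame_feat, (s, s))   # (B, C, s, s)
        tokens.append(pooled.flatten(2).transpose(1, 2))     # (B, s*s, C)
    return torch.cat(tokens, dim=1)                          # local-to-global tokens


# Example usage with assumed backbone feature shapes:
# feat = torch.randn(8, 2048, 7, 7)
# tokens = multiscale_patch_features(feat)   # (8, 1 + 4 + 16, 2048)
# psr = PatchSpatialRelation(dim=2048)
# out = psr(tokens)                          # same shape, patches now related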
Pages: 333-350
Number of pages: 18