TEST: Temporal-spatial separated transformer for temporal action localization

Cited: 0
Authors
Wan, Herun [1 ,2 ,3 ]
Luo, Minnan [1 ,2 ,3 ]
Li, Zhihui [4 ]
Wang, Yang [5 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian 710049, Peoples R China
[2] Xi An Jiao Tong Univ, Minist Educ, Key Lab Intelligent Networks & Network Secur, Xian 710049, Peoples R China
[3] Xi An Jiao Tong Univ, Shaanxi Prov Key Lab Big Data Knowledge Engn, Xian 710049, Peoples R China
[4] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
[5] Xi An Jiao Tong Univ, Sch Continuing Educ, Xian 710049, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video transformer; Temporal action localization; High efficiency; NETWORK;
DOI
10.1016/j.neucom.2024.128688
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Temporal action localization is a fundamental task in video understanding. Existing methods fall into three categories: anchor-based, actionness-guided, and anchor-free. Anchor-based and actionness-guided models require substantial computational resources to process redundant proposals or to enumerate every possible proposal. Anchor-free models, with fewer parameters, become a more attractive option as temporal actions grow more complex. However, they typically struggle to achieve high performance because they must aggregate global temporal-spatial features at every time step. To overcome this limitation, we design three efficient transformer-based architectures that bring two advantages: (i) the global receptive field of transformers enables models to aggregate spatial and temporal features at each time step, and (ii) transformers capture moment-level features, enhancing localization performance. The designed architectures can be adapted to any framework; building on them, we propose a simple but effective anchor-free framework named TEST. Compared to strong baselines, TEST achieves a 0.96% to 3.20% improvement on two real-world datasets. Meanwhile, it improves time efficiency by 1.36 times and space efficiency by 1.08 times. Further experiments verify the effectiveness of TEST's modules. An implementation of our work is available at https://github.com/whr000001/TeST.
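The abstract describes factorizing attention so that spatial and temporal context are aggregated separately at every time step. The paper's actual architecture is in the linked repository; the sketch below is only an illustrative assumption of what a temporal-spatial separated transformer block could look like (divided spatial-then-temporal self-attention in PyTorch). The class name, dimensions, and tensor layout are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: NOT the released TEST code
# (see https://github.com/whr000001/TeST for the authors' implementation).
import torch
import torch.nn as nn


class SeparatedSpaceTimeBlock(nn.Module):
    """Hypothetical temporal-spatial separated transformer block:
    spatial self-attention within each frame, then temporal
    self-attention across frames at every spatial location."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_f = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, space, dim) clip features from a video backbone.
        b, t, s, d = x.shape

        # 1) Spatial attention: tokens of the same frame attend to each other.
        xs = x.reshape(b * t, s, d)
        h = self.norm_s(xs)
        xs = xs + self.spatial_attn(h, h, h, need_weights=False)[0]
        x = xs.reshape(b, t, s, d)

        # 2) Temporal attention: each spatial location attends over all frames,
        #    so every time step gets a global temporal receptive field.
        xt = x.permute(0, 2, 1, 3).reshape(b * s, t, d)
        h = self.norm_t(xt)
        xt = xt + self.temporal_attn(h, h, h, need_weights=False)[0]
        x = xt.reshape(b, s, t, d).permute(0, 2, 1, 3)

        # 3) Position-wise feed-forward network with a residual connection.
        return x + self.ffn(self.norm_f(x))


if __name__ == "__main__":
    block = SeparatedSpaceTimeBlock()
    clip = torch.randn(2, 32, 7 * 7, 256)  # (batch, frames, spatial tokens, channels)
    print(block(clip).shape)               # torch.Size([2, 32, 49, 256])
```

Under such a factorization the attention cost drops from O((T*S)^2) for joint space-time attention to O(T*S^2 + S*T^2), which is the usual efficiency argument for separated designs.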
Pages: 9
Related Papers
50 records in total
  • [21] Multi-granularity transformer fusion for temporal action localization
    Zhang M.
    Hu H.
    Li Z.
    Soft Computing, 2024, 28 (20) : 12377 - 12388
  • [22] TALLFormer: Temporal Action Localization with a Long-Memory Transformer
    Cheng, Feng
    Bertasius, Gedas
    COMPUTER VISION, ECCV 2022, PT XXXIV, 2022, 13694 : 503 - 521
  • [23] AUDITORY-VISUAL AND TEMPORAL-SPATIAL INTEGRATION AS DETERMINANTS OF TEST DIFFICULTY
    STERRITT, GM
    MARTIN, V
    RUDNICK, M
    PSYCHONOMIC SCIENCE, 1971, 23 (04): 289 - 291
  • [24] Learning Temporal-Spatial Spectrum Reuse
    Zhang, Yi
    Tay, Wee Peng
    Li, Kwok Hung
    Esseghir, Moez
    Gaiti, Dominique
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2016, 64 (07) : 3092 - 3103
  • [25] Estimation of joint temporal-spatial semivariograms
    Buxton, BE
    Pate, AD
    GEOSTATISTICS WOLLONGONG '96, VOLS 1 AND 2, 1997, 8 (1-2): 150 - 161
  • [26] Temporal-spatial optical information processing
    Ichioka, Y
    Konishi, T
    PHOTOREFRACTIVE FIBER AND CRYSTAL DEVICES: MATERIALS, OPTICAL PROPERTIES, AND APPLICATIONS III, 1997, 3137 : 222 - 227
  • [27] BLOCKWISE TEMPORAL-SPATIAL PATHWAY NETWORK
    Hong, SeulGi
    Choi, Min-Kook
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3677 - 3681
  • [28] Human Activity Classification and Localization Algorithm Based on Temporal-Spatial Virtual Array
    Okamoto, Yoshihisa
    Ohtsuki, Tomoaki
    2013 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2013, : 1512 - 1516
  • [29] Three-Branch Temporal-Spatial Convolutional Transformer for Motor Imagery EEG Classification
    Chen, Weiming
    Luo, Yiqing
    Wang, Jie
    IEEE ACCESS, 2024, 12 : 79754 - 79764
  • [30] Spatial-temporal graph transformer network for skeleton-based temporal action segmentation
    Tian, Xiaoyan
    Jin, Ye
    Zhang, Zhao
    Liu, Peng
    Tang, Xianglong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (15) : 44273 - 44297