TransVOD: End-to-End Video Object Detection With Spatial-Temporal Transformers

Citations: 57
Authors
Zhou, Qianyu [1 ]
Li, Xiangtai [2 ]
He, Lu [1 ]
Yang, Yibo [3 ]
Cheng, Guangliang [4 ]
Tong, Yunhai [2 ]
Ma, Lizhuang [1 ]
Tao, Dacheng [3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[2] Peking Univ, Sch Artificial Intelligence, Beijing 100871, Peoples R China
[3] JD Explore Acad, Beijing 100176, Peoples R China
[4] SenseTime Res, Beijing 100080, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transformers; Object detection; Pipelines; Detectors; Streaming media; Fuses; Task analysis; Video object detection; vision transformers; scene understanding; video understanding;
DOI
10.1109/TPAMI.2022.3223955
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance comparable to previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on simple yet effective spatial-temporal Transformer architectures. The first goal of this paper is to streamline the current VOD pipeline, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Moreover, benefiting from the object query design in DETR, our method does not need post-processing such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) that fuses object queries across frames, and a Temporal Deformable Transformer Decoder (TDTD) that produces the detection results of the current frame. These designs boost the strong Deformable DETR baseline by a significant margin (3%-4% mAP) on the ImageNet VID dataset, on which TransVOD yields performance comparable to existing methods. We then present two improved versions of TransVOD: TransVOD++ and TransVOD Lite. The former fuses object-level information into the object queries via dynamic convolution, while the latter models the entire video clip in a single pass to speed up inference. We give a detailed analysis of all three models in the experiments. In particular, our proposed TransVOD++ sets a new state-of-the-art record in accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU.
Code and models are available at https://github.com/SJTU-LuHe/TransVOD.
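The core idea of the temporal query aggregation can be sketched roughly as follows. This is an illustrative toy in NumPy, not the authors' implementation: a single-head scaled dot-product attention in which the current frame's object queries attend over the queries of all frames, mirroring the role of the Temporal Query Encoder. All function names, shapes, and dimensions here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_query_fusion(queries):
    """Fuse per-frame object queries into queries for the current frame.

    queries: (T, N, D) array -- T frames, N object queries, D channels.
    The last frame is treated as the current frame; its queries attend
    over the queries of all frames (single-head attention) so that
    temporal context flows into the current-frame representation.
    """
    T, N, D = queries.shape
    cur = queries[-1]                    # (N, D) current-frame queries
    mem = queries.reshape(T * N, D)      # (T*N, D) queries from all frames
    attn = softmax(cur @ mem.T / np.sqrt(D), axis=-1)  # (N, T*N)
    return attn @ mem                    # (N, D) fused queries

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 100, 256))       # 4 frames, 100 queries, dim 256
fused = temporal_query_fusion(q)
print(fused.shape)                       # (100, 256)
```

In the actual model the fused queries would then be consumed by the decoder (TDTD) to predict the current frame's boxes; this sketch only shows the cross-frame fusion step.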
Pages: 7853-7869
Page count: 17