Exploiting spatial-temporal context for trajectory based action video retrieval

Cited: 0
Authors
Lelin Zhang
Zhiyong Wang
Tingting Yao
Shin’ichi Satoh
Tao Mei
David Dagan Feng
Affiliations
[1] The University of Sydney,School of Information Technologies
[2] Hefei University of Technology,School of Computer and Information
[3] National Institute of Informatics
[4] Microsoft Research
Keywords
Spatial-temporal information; Descriptor coding; Trajectory matching; Bag-of-visual-words; Action video retrieval;
DOI: not available
Abstract
Retrieving videos with similar actions is an important task with many applications. Yet it is very challenging due to large variations across different videos. While the state-of-the-art approaches generally utilize the bag-of-visual-words representation with the dense trajectory feature, the spatial-temporal context among trajectories is overlooked. In this paper, we propose to incorporate such information into the descriptor coding and trajectory matching stages of the retrieval pipeline. Specifically, to capture the spatial-temporal correlations among trajectories, we develop a descriptor coding method based on the correlation between spatial-temporal and feature aspects of individual trajectories. To deal with the misalignments between dense trajectory segments, we develop an offset-aware distance measure for improved trajectory matching. Our comprehensive experimental results on two popular datasets indicate that the proposed method improves the performance of action video retrieval, especially on more dynamic actions with significant movements and cluttered backgrounds.
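The offset-aware distance mentioned in the abstract can be illustrated with a minimal sketch: two trajectory segments are compared under small temporal shifts and the best (minimum) average point-wise distance is kept, so slightly misaligned segments are not over-penalized. This is a generic illustration of the idea, not the authors' exact formulation; the function name, the `max_offset` parameter, and the use of plain Euclidean point distances are assumptions for this sketch.

```python
import numpy as np

def offset_aware_distance(a, b, max_offset=2):
    """Hypothetical sketch of an offset-aware distance between two
    trajectory segments `a` and `b`, each an array of shape (T, 2)
    holding x/y point coordinates. The segments are slid against each
    other over temporal offsets in [-max_offset, max_offset] and the
    minimum mean point-wise Euclidean distance is returned.
    NOTE: illustration only, not the paper's exact measure."""
    best = np.inf
    T = min(len(a), len(b))
    for off in range(-max_offset, max_offset + 1):
        # Overlapping index ranges of the two segments under shift `off`.
        ia = slice(max(0, off), min(T, T + off))
        ib = slice(max(0, -off), min(T, T - off))
        pa, pb = a[ia], b[ib]
        if len(pa) == 0:
            continue
        # Mean Euclidean distance over the overlapping points.
        d = np.linalg.norm(pa - pb, axis=1).mean()
        best = min(best, d)
    return best
```

For example, a segment compared against a one-frame-delayed copy of itself yields distance 0 under this measure, whereas a naive frame-by-frame comparison would report a nonzero distance.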
Pages: 2057 - 2081
Page count: 24
Related papers (50 total)
  • [1] Exploiting spatial-temporal context for trajectory based action video retrieval
    Zhang, Lelin
    Wang, Zhiyong
    Yao, Tingting
Satoh, Shin'ichi
    Mei, Tao
    Feng, David Dagan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (02) : 2057 - 2081
  • [2] Spatial-Temporal Correlation for Trajectory based Action Video Retrieval
    Shen, Xi
    Zhang, Lelin
    Wang, Zhiyong
    Feng, Dagan
    2015 IEEE 17TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2015,
  • [3] Exploiting Spatial-Temporal Context for Interacting Hand Reconstruction on Monocular RGB Video
    Zhao, Weichao
    Hu, Hezhen
    Zhou, Wengang
    Li, Li
    Li, Houqiang
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (06)
  • [4] Optimum Video Subset and Spatial-Temporal Video Retrieval
    Wang M.-Z.
    Liu X.-J.
    Sun K.-X.
    Wang Z.-R.
    Jisuanji Xuebao/Chinese Journal of Computers, 2019, 42 (09): : 2004 - 2023
  • [5] Exploiting Spatial-temporal Correlations for Video Anomaly Detection
    Zhao, Mengyang
    Liu, Yang
    Liu, Jing
    Zeng, Xinhua
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 1727 - 1733
  • [6] Motion Trajectory based Spatial-Temporal Degradation Measurement for Video Quality Assessment
    Wu, Jinjian
    Liu, Yongxu
    Shi, Guangming
    2018 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (IEEE VCIP), 2018,
  • [7] Analysis of video trajectory based on spatial-temporal extension to locally linear embedding
    Fu M.-S.
    Luo B.
    Kong M.
    Qin J.-P.
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2011, 39 (05): : 97 - 101
  • [8] Video abstract system based on spatial-temporal neighborhood trajectory analysis algorithm
    Huang, Han
    Fu, Shen
    Cai, Zhao-Quan
    Li, Bin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (09) : 11321 - 11338
  • [9] Video abstract system based on spatial-temporal neighborhood trajectory analysis algorithm
    Han Huang
    Shen Fu
    Zhao-Quan Cai
    Bin Li
    Multimedia Tools and Applications, 2018, 77 : 11321 - 11338
  • [10] Spatial-Temporal Separable Attention for Video Action Recognition
    Guo, Xi
    Hu, Yikun
    Chen, Fang
    Jin, Yuhui
    Qiao, Jian
    Huang, Jian
    Yang, Qin
    2022 INTERNATIONAL CONFERENCE ON FRONTIERS OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING, FAIML, 2022, : 224 - 228