Human action recognition based on point context tensor shape descriptor

Cited: 1
Authors
Li, Jianjun [1 ,2 ]
Mao, Xia [1 ]
Chen, Lijiang [1 ]
Wang, Lan [1 ]
Affiliations
[1] Beihang Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
[2] Inner Mongolia Univ Sci & Technol, Sch Elect & Informat Engn, Baotou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
action recognition; tensor mode; dynamic time warping; tensor shape descriptor; view-invariant;
DOI
10.1117/1.JEI.26.4.043024
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Motion trajectory recognition is one of the most important means of determining the identity of a moving object, and a compact, discriminative feature representation can improve trajectory recognition accuracy. This paper presents an efficient framework for action recognition using a three-dimensional skeleton kinematic joint model. First, we propose a rotation-, scale-, and translation-invariant shape descriptor based on point context (PC) and the normal vector of the hypersurface to jointly characterize local motion and shape information; in addition, an algorithm that extracts key trajectories based on a confidence coefficient is proposed to reduce randomness and computational complexity. Second, to reduce the time complexity of eigenvalue decomposition, we propose a PC-based tensor shape descriptor (TSD) that globally captures spatial layout and temporal order while preserving the spatial information of each frame. A multilinear projection is then carried out by tensor dynamic time warping, which maps the TSD to a low-dimensional tensor subspace of the same size. Experimental results show that the proposed shape descriptor is effective and feasible, and that the proposed approach yields a considerable accuracy improvement over state-of-the-art approaches on a public action dataset. (C) 2017 SPIE and IS&T
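The alignment step in the abstract builds on dynamic time warping (DTW): the paper's tensor DTW aligns sequences of tensor shape descriptors via multilinear projection, generalizing the classic frame-wise algorithm. As a rough, non-authoritative sketch of the underlying idea only, the Python snippet below implements standard DTW between two skeleton sequences; the function name, feature layout, and frame-wise Euclidean distance are illustrative assumptions, not details from the paper.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Classic DTW cost between two sequences of per-frame feature vectors.

    seq_a: (n, d) array, seq_b: (m, d) array. The paper's tensor DTW would
    replace the frame-wise Euclidean distance with a distance between
    per-frame tensor shape descriptors (an assumption for illustration).
    """
    n, m = len(seq_a), len(seq_b)
    # Pairwise Euclidean distances between every frame of A and every frame of B.
    cost = np.linalg.norm(seq_a[:, None, :] - seq_b[None, :, :], axis=-1)

    # Accumulated-cost matrix with an inf border to seed the recursion.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # skip a frame of A
                acc[i, j - 1],      # skip a frame of B
                acc[i - 1, j - 1],  # match the two frames
            )
    return float(acc[n, m])

# Toy example: two "skeleton" sequences of 20 and 25 frames,
# each frame flattened to 15 joints x 3 coordinates = 45 features.
rng = np.random.default_rng(0)
query = rng.standard_normal((20, 45))
template = rng.standard_normal((25, 45))
print(dtw_distance(query, template))
```

DTW runs in O(nm) time and, unlike fixed-length resampling, tolerates speed variations between performances of the same action, which is presumably why a DTW-style alignment is attractive before classification.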
Pages: 10
Related Papers
50 records in total
  • [11] Human Activity Recognition Using the 4D Spatiotemporal Shape Context Descriptor
    Kholgade, Natasha
    Savakis, Andreas
    ADVANCES IN VISUAL COMPUTING, PT 2, PROCEEDINGS, 2009, 5876 : 357 - 366
  • [12] 3D gesture trajectory recognition based on point context descriptor
    Mao X.
    Li C.
    Wu X.
Journal of Huazhong University of Science and Technology (Natural Science Edition), 2016, 44 (08) : 52 - 57
  • [13] Hierarchical registration of unordered TLS point clouds based on binary shape context descriptor
    Dong, Zhen
    Yang, Bisheng
    Liang, Fuxun
    Huang, Ronggang
    Scherer, Sebastian
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2018, 144 : 61 - 79
  • [14] Part-based motion descriptor image for human action recognition
    Tran, K. N.
    Kakadiaris, I. A.
    Shah, S. K.
    PATTERN RECOGNITION, 2012, 45 (07) : 2562 - 2572
  • [15] Automatic Video Descriptor for Human Action Recognition
    Perera, Minoli
    Farook, Cassim
    Madurapperuma, A. P.
    2017 NATIONAL INFORMATION TECHNOLOGY CONFERENCE (NITC), 2017, : 61 - 66
  • [16] Human Action Recognition With Trajectory Based Covariance Descriptor In Unconstrained Videos
    Wang, Hanli
    Yi, Yun
    Wu, Jun
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE, 2015, : 1175 - 1178
  • [17] Human Action Recognition Using Bone Pair Descriptor and Distance Descriptor
    Warchol, Dawid
    Kapuscinski, Tomasz
SYMMETRY-BASEL, 2020, 12 (10)
  • [18] Distribution of action movements (DAM): a descriptor for human action recognition
    Ronchetti, Franco
    Quiroga, Facundo
    Lanzarini, Laura
    Estrebou, Cesar
    FRONTIERS OF COMPUTER SCIENCE, 2015, 9 (06) : 956 - 965