Accurate 3D action recognition using learning on the Grassmann manifold

Cited by: 149
Authors
Slama, Rim [1 ,2 ]
Wannous, Hazem [1 ,2 ]
Daoudi, Mohamed [2 ,3 ]
Srivastava, Anuj [4 ]
Affiliations
[1] Univ Lille 1, F-59655 Villeneuve d'Ascq, France
[2] CNRS, UMR 8022, LIFL Lab, Villeneuve d'Ascq, France
[3] Inst Mines Telecom, Telecom Lille, Villeneuve d'Ascq, France
[4] Florida State Univ, Dept Stat, Tallahassee, FL 32306 USA
Funding
US National Science Foundation;
Keywords
Human action recognition; Grassmann manifold; Observational latency; Depth images; Skeleton; Classification; SPARSE REPRESENTATION; VIDEO; ALGORITHMS;
DOI
10.1016/j.patcog.2014.08.011
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we address the problem of modeling and analyzing human motion by focusing on 3D body skeletons. In particular, our intent is to represent skeletal motion in a geometric and efficient way, leading to an accurate action-recognition system. Here, an action is represented by a dynamical system whose observability matrix is characterized as an element of a Grassmann manifold. To formulate our learning algorithm, we propose two distinct ideas: (1) in the first, we perform classification using a Truncated Wrapped Gaussian model, one for each class in its own tangent space; (2) in the second, we propose a novel learning algorithm that uses a vector representation formed by concatenating local coordinates in the tangent spaces associated with different classes and then trains a linear SVM. We evaluate our approaches on three public 3D action datasets, MSR-Action3D, UT-Kinect, and UCF-Kinect, which pose different kinds of challenges and together provide an exhaustive evaluation. The results show that our approaches either match or exceed state-of-the-art performance, reaching 91.21% on MSR-Action3D, 97.91% on UCF-Kinect, and 88.5% on UT-Kinect. Finally, we evaluate the latency of our approach, i.e., its ability to recognize an action before it terminates, and demonstrate improvements relative to other published approaches. (C) 2014 Elsevier Ltd. All rights reserved.
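To make the pipeline in the abstract concrete, below is a minimal sketch (not the authors' released code) of the second idea under simplifying assumptions: each skeleton sequence is fit with a linear dynamical system via the standard SVD-based identification, the truncated observability matrix is orthonormalized into a point on the Grassmann manifold, and tangent-space coordinates, computed here at a single common base point rather than in per-class tangent spaces, are fed to a linear SVM. The feature dimension m, state dimension d, truncation order n, and the toy random data are all illustrative assumptions.

```python
# Minimal sketch, NOT the authors' implementation: LDS fit, observability-
# matrix embedding on the Grassmann manifold, linear SVM on tangent coordinates.
import numpy as np
from sklearn.svm import LinearSVC

def observability_subspace(F, d=5, n=3):
    """F: (m, T) feature matrix of a skeleton sequence (m coords, T frames).
    Fits f(t) = C x(t), x(t+1) = A x(t), then returns an orthonormal basis
    of the truncated observability matrix O_n = [C; CA; ...; CA^(n-1)],
    i.e. a point on the Grassmann manifold G(d, n*m)."""
    U, S, Vt = np.linalg.svd(F, full_matrices=False)
    C = U[:, :d]                                # observation matrix
    X = np.diag(S[:d]) @ Vt[:d]                 # estimated hidden-state sequence
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])    # transition matrix (least squares)
    blocks, Ak = [], np.eye(d)
    for _ in range(n):
        blocks.append(C @ Ak)                   # C A^k block of O_n
        Ak = Ak @ A
    Q, _ = np.linalg.qr(np.vstack(blocks))      # orthonormalize O_n
    return Q

def grassmann_log(Y1, Y2):
    """Tangent vector at Y1 pointing toward Y2 (standard Grassmann log map)."""
    Yt = Y2 @ np.linalg.inv(Y1.T @ Y2)
    U, S, Vt = np.linalg.svd(Yt - Y1 @ (Y1.T @ Yt), full_matrices=False)
    return U @ np.diag(np.arctan(S)) @ Vt

# Toy usage on random "sequences" (60-dim joint features, 40 frames each);
# a single common base point stands in for the paper's per-class tangent spaces.
rng = np.random.default_rng(0)
seqs = [rng.standard_normal((60, 40)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)
points = [observability_subspace(F) for F in seqs]
base = points[0]
feats = np.array([grassmann_log(base, Y).ravel() for Y in points])
clf = LinearSVC().fit(feats, labels)
print(clf.predict(feats[:3]))
```

In the paper's actual formulation, local coordinates from the tangent spaces of several class-specific reference points are concatenated before training the SVM; the single-base-point version above only illustrates the observability-matrix embedding, the Grassmann log map, and the linear-SVM step.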
Pages: 556 - 567
Number of pages: 12
Related Papers
50 records in total
  • [1] Human action recognition by Grassmann manifold learning
    Rahimi, Sahere
    Aghagolzadeh, Ali
    Ezoji, Mehdi
    2015 9TH IRANIAN CONFERENCE ON MACHINE VISION AND IMAGE PROCESSING (MVIP), 2015, : 61 - 64
  • [2] Learning Action Images Using Deep Convolutional Neural Networks For 3D Action Recognition
    Thien Huynh-The
    Hua, Cam-Hao
    Kim, Dong-Seong
    2019 IEEE SENSORS APPLICATIONS SYMPOSIUM (SAS), 2019
  • [3] Learning prototypes and similes on Grassmann manifold for spontaneous expression recognition
    Liu, Mengyi
    Wang, Ruiping
    Shan, Shiguang
    Chen, Xilin
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2016, 147 : 95 - 101
  • [4] 3d human motion tracking using manifold learning
    Guo, Feng
    Qian, Gang
    2007 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-7, 2007, : 357 - +
  • [5] 3D ear recognition using local salience and principal manifold
    Sun, Xiaopeng
    Wang, Guan
    Wang, Lu
    Sun, Hongyan
    Wei, Xiaopeng
    GRAPHICAL MODELS, 2014, 76 : 402 - 412
  • [6] 3D Object Recognition with Enhanced Grassmann Discriminant Analysis
    de Souza, Lincon Sales
    Hino, Hideitsu
    Fukui, Kazuhiro
    COMPUTER VISION - ACCV 2016 WORKSHOPS, PT III, 2017, 10118 : 345 - 359
  • [7] Effective 3D action recognition using EigenJoints
    Yang, Xiaodong
    Tian, YingLi
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2014, 25 (01) : 2 - 11
  • [8] Action recognition using 3D DAISY descriptor
    Cao, Xiaochun
    Zhang, Hua
    Deng, Chao
    Liu, Qiguang
    Liu, Hanyu
    MACHINE VISION AND APPLICATIONS, 2014, 25 (01) : 159 - 171
  • [9] 3D sparse quantization for feature learning in action recognition
    Zhao, Yang
    Cheng, Hong
    Yang, Lu
    2015 IEEE CHINA SUMMIT & INTERNATIONAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING, 2015, : 263 - 267