A method for human action recognition

Cited by: 89
Authors
Masoud, O [1 ]
Papanikolopoulos, N [1 ]
Affiliation
[1] Univ Minnesota, Dept Comp Sci & Engn, Minneapolis, MN 55455 USA
Keywords
motion recognition; human tracking; articulated motion;
DOI
10.1016/S0262-8856(03)00068-4
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This article deals with the problem of classifying human activities from video. Our approach uses motion features that are computed very efficiently and subsequently projected into a lower-dimensional space where matching is performed. Each action is represented as a manifold in this lower-dimensional space, and matching is done by comparing these manifolds. To demonstrate the effectiveness of this approach, it was used on a large data set of similar actions, each performed by many different actors. Classification results were very accurate and show that this approach is robust to challenges such as variations in performers' physical attributes, color of clothing, and style of motion. An important result of this article is that neither the recovery of the three-dimensional properties of a moving person nor even the two-dimensional tracking of the person's limbs need precede action recognition. (C) 2003 Elsevier Science B.V. All rights reserved.
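The paper itself does not include code, but the pipeline sketched in the abstract (project per-frame motion features into a low-dimensional space, treat each action's trajectory there as a manifold, classify by manifold-to-manifold distance) can be illustrated with a minimal stand-in. This sketch assumes PCA for the dimensionality reduction and a symmetric mean nearest-neighbour distance for the manifold comparison; both are illustrative substitutes, not the authors' exact method.

```python
import numpy as np

def pca_project(features, k=3):
    """Project frame-wise motion features (n_frames x d) into a k-dim space.

    Returns the projected points plus the (mean, basis) needed to
    project new sequences into the same space.
    """
    mean = features.mean(axis=0)
    centered = features - mean
    # Rows of vt are the principal directions of the feature cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]
    return centered @ basis.T, (mean, basis)

def manifold_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two point sets,
    used here as a simple proxy for comparing action manifolds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def classify(query, templates):
    """Return the label of the template manifold closest to the query."""
    return min(templates, key=lambda lbl: manifold_distance(query, templates[lbl]))
```

A noisy repetition of a known motion pattern, projected with the same PCA basis, should land near that pattern's manifold and be labelled accordingly; note that no 3D reconstruction or limb tracking appears anywhere in the pipeline, which is the point the abstract emphasizes.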
Pages: 729 - 743
Page count: 15
Related Papers
50 records in total
  • [31] Human action recognition method based on historical point cloud trajectory characteristics
    Li, Donglu
    Jahan, Hosney
    Huang, Xiaoyi
    Feng, Ziliang
    VISUAL COMPUTER, 2022, 38 (08): : 2971 - 2979
  • [33] Human action recognition method based on Motion Excitation and Temporal Aggregation module
    Ye, Qing
    Tan, Zexian
    Zhang, Yongmei
    HELIYON, 2022, 8 (11)
  • [34] Silhouette-based method for object classification and human action recognition in video
    Dedeoglu, Yigithan
    Toreyin, B. Ugur
    Gudukbay, Ugur
    Cetin, A. Enis
    COMPUTER VISION IN HUMAN-COMPUTER INTERACTION, 2006, 3979 : 64 - 77
  • [35] Human Action Recognition Bases on Local Action Attributes
    Zhang, Jing
    Lin, Hong
    Nie, Weizhi
    Chaisorn, Lekha
    Wong, Yongkang
    Kankanhalli, Mohan S.
    JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, 2015, 10 (03) : 1264 - 1274
  • [36] Human Action Recognition Using Action Trait Code
    Lin, Shih-Yao
    Shie, Chuen-Kai
    Chen, Shen-Chi
    Lee, Ming-Sui
    Hung, Yi-Ping
    2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012), 2012, : 3456 - 3459
  • [37] Deep Learning for Human Action Recognition
    Shekokar, R. U.
    Kale, S. N.
    2021 6TH INTERNATIONAL CONFERENCE FOR CONVERGENCE IN TECHNOLOGY (I2CT), 2021,
  • [38] Chaotic invariants for human action recognition
    Ali, Saad
    Basharat, Arslan
    Shah, Mubarak
    2007 IEEE 11TH INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS 1-6, 2007, : 1703 - 1710
  • [39] Human action recognition on depth dataset
    Zan Gao
    Hua Zhang
    Anan A. Liu
    Guangping Xu
    Yanbing Xue
    Neural Computing and Applications, 2016, 27 : 2047 - 2054
  • [40] Convex Deficiencies for Human Action Recognition
    Mabel Iglesias-Ham
    Edel Bartolo García-Reyes
    Walter George Kropatsch
    Nicole Maria Artner
    Journal of Intelligent & Robotic Systems, 2011, 64 : 353 - 364