Human action recognition: a construction of codebook by discriminative features selection approach

Cited by: 20
Authors
Siddiqui, Samra [1 ]
Khan, Muhammad Attique [1 ,2 ]
Bashir, Khalid [3 ]
Sharif, Muhammad [1 ]
Azam, Faisal [1 ]
Javed, Muhammad Younus [2 ]
Affiliations
[1] COMSATS Univ Islamabad, Dept Comp Sci, Wah Campus, Wah Cantt, Pakistan
[2] HITEC Univ, Dept Comp Sci & Engn, Museum Rd, Taxila, Pakistan
[3] Islamic Univ Madinah, Fac Comp & Informat Syst, Medina, Saudi Arabia
Keywords
human activity recognition; HAR; motion history image; MHI; object recognition; silhouette extraction;
DOI
10.1504/IJAPR.2018.094815
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human activity recognition (HAR) is a significant problem in pattern recognition: the same action appears in heterogeneous forms when performed by different subjects, and human pose changes continuously. This research contributes a method focused on changes in human movement, with the aim of identifying and categorising human actions in video sequences. Interest points (IPs) are extracted from the subject video, and motion history images (MHIs) are constructed and analysed after image segmentation. Discriminative features (DFs) are selected, and a visual vocabulary is learned from the extracted DFs (EDFs). The EDFs are then quantised using the visual vocabulary, and images are represented by the frequencies of visual words (VWs): VWs are formed from the EDFs, and a histogram of VWs is built from the feature vectors extracted from the MHIs. These feature vectors are used to train a support vector machine (SVM) that classifies actions into categories. Evaluation on benchmark datasets such as KTH, Weizmann and HMDB51, and comparison with existing action recognition approaches, shows the better performance of the adopted strategy.
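The codebook stage the abstract describes follows the standard bag-of-visual-words recipe: local descriptors are clustered into a vocabulary, each descriptor is mapped to its nearest visual word, and a video becomes a histogram of word frequencies that the SVM consumes. A minimal sketch of that recipe, in plain Python with illustrative function names (not the authors' code, and omitting the interest-point/MHI extraction and classifier steps):

```python
# Hedged sketch of the visual-vocabulary / histogram-of-visual-words stage.
# Descriptors stand in for the discriminative features the paper extracts
# from motion history images; names here are illustrative assumptions.
import random
import math

def nearest(centroids, d):
    """Index of the closest visual word (centroid) to descriptor d."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(centroids[i], d))

def learn_vocabulary(descriptors, k, iters=10, seed=0):
    """Learn k visual words from local descriptors via basic k-means."""
    rng = random.Random(seed)
    centroids = rng.sample(descriptors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for d in descriptors:
            clusters[nearest(centroids, d)].append(d)
        for i, members in enumerate(clusters):
            if members:  # recompute centroid as the mean of its members
                centroids[i] = [sum(x) / len(members) for x in zip(*members)]
    return centroids

def bow_histogram(centroids, descriptors):
    """Represent one video as a normalised histogram of visual words."""
    hist = [0.0] * len(centroids)
    for d in descriptors:
        hist[nearest(centroids, d)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

In the paper's pipeline, one such histogram per video would then serve as the feature vector for SVM training.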
Pages: 206-228 (23 pages)
Related Papers
50 records in total
  • [1] Learning Discriminative Visual Codebook for Human Action Recognition
    Lei, Qing
    Li, Shao-zi
    Zhang, Hong-bo
    JOURNAL OF COMPUTERS, 2013, 8 (12) : 3093 - 3102
  • [2] A discriminative prototype selection approach for graph embedding in human action recognition
    Borzeshi, Ehsan Zare
    Piccardi, Massimo
    Da Xu, Richard Yi
    2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCV WORKSHOPS), 2011,
  • [3] Discriminative Part Selection for Human Action Recognition
    Zhang, Shiwei
    Gao, Changxin
    Zhang, Jing
    Chen, Feifei
    Sang, Nong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2018, 20 (04) : 769 - 780
  • [4] Action recognition via structured codebook construction
    Zhou, Wen
    Wang, Chunheng
    Xiao, Baihua
    Zhang, Zhong
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2014, 29 (04) : 546 - 555
  • [5] A discriminative representation for human action recognition
    Yuan, Yuan
    Zheng, Xiangtao
    Lu, Xiaoqiang
    PATTERN RECOGNITION, 2016, 59 : 88 - 97
  • [6] Discriminative two-level feature selection for realistic human action recognition
    Wu, Qiuxia
    Wang, Zhiyong
    Deng, Feiqi
    Yong, Xia
    Kang, Wenxiong
    Feng, David Dagan
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2013, 24 (07) : 1064 - 1074
  • [7] Action Recognition with Discriminative Mid-Level Features
    Liu, Cuiwei
    Kong, Yu
    Wu, Xinxiao
    Jia, Yunde
    2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012), 2012, : 3366 - 3369
  • [8] Learning Discriminative Convolutional Features for Skeletal Action Recognition
    Xu, Jinhua
    Xiang, Yang
    Hu, Lizhang
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT III, 2017, 10636 : 564 - 574
  • [9] Learning a Hierarchy of Discriminative Space-Time Neighborhood Features for Human Action Recognition
    Kovashka, Adriana
    Grauman, Kristen
    2010 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2010, : 2046 - 2053
  • [10] Human Action Recognition Based on Discriminative Supervoxels
    Guo, Yanan
    Ma, Wei
    Duan, Lijuan
    En, Qing
    Chen, Juncheng
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 3863 - 3869