A data augmentation method for human action recognition using dense joint motion images

Cited by: 12
Authors
Yao, Leiyue [1 ]
Yang, Wei [2 ]
Huang, Wei [1 ]
Affiliations
[1] Nanchang Univ, Sch Informat Engn, 999 Xuefu Rd, Nanchang 330031, Jiangxi, Peoples R China
[2] Jiangxi Univ Technol, Ctr Collaborat & Innovat, 99 ZiYang Rd, Nanchang 330098, Jiangxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Human action recognition; Motion image; Action encoding; Few-shot learning; Skeleton-based action recognition; ATTENTION;
DOI
10.1016/j.asoc.2020.106713
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
With the development of deep learning and neural network techniques, human action recognition has made great progress in recent years. However, it remains challenging to analyse temporal information and to identify human actions from few training samples. In this paper, an effective motion image called the dense joint motion image (DJMI) is proposed to transform an action into an image. Our method was compared with state-of-the-art methods, and its contributions lie in three characteristics. First, in contrast to the classic joint trajectory map (JTM), every pixel of the DJMI is useful and carries essential spatio-temporal information; the input parameters of the deep neural network (DNN) are therefore reduced by an order of magnitude, and the efficiency of action recognition is improved. Second, each frame of an action video is encoded as an independent slice of the DJMI, which avoids the information loss caused by overlapping action trajectories. Third, because DJMIs are ordinary images, well-established image-processing algorithms can be used to generate additional training samples. Compared with the original image, the generated DJMIs contain new spatio-temporal information, which enables DNNs to be trained well on very few samples. Our method was evaluated on three benchmark datasets, namely Florence-3D, UTKinect-Action3D and MSR Action3D. The results show that it achieves a recognition speed of 37 fps with competitive accuracy on these datasets. Its time efficiency and few-shot learning capability make it suitable for real-time surveillance. (C) 2020 Elsevier B.V. All rights reserved.
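The abstract describes the DJMI idea only at a high level, so the following Python sketch is merely a rough illustration under stated assumptions: each frame of a skeleton sequence is encoded as one independent slice (row) of a dense image, and additional training samples are produced with ordinary image-level transforms. The function names (skeleton_to_djmi, augment_djmi), the fixed image height of 64 and the particular perturbations are illustrative assumptions, not the authors' actual encoding or augmentation pipeline.

```python
import numpy as np

def skeleton_to_djmi(frames, height=64):
    """Encode a skeleton sequence as a dense, DJMI-style image (sketch only).

    frames : array of shape (T, J, 3) -- T frames, J joints, (x, y, z) coordinates.
    Each frame becomes one independent row ("slice") of the output image,
    so trajectories from different frames never overlap.
    """
    frames = np.asarray(frames, dtype=np.float32)
    T, J, C = frames.shape
    # Normalise all coordinates to [0, 1] over the whole sequence.
    mins = frames.reshape(-1, C).min(axis=0)
    maxs = frames.reshape(-1, C).max(axis=0)
    norm = (frames - mins) / (maxs - mins + 1e-8)
    # One row per frame, one column block per joint (x, y, z side by side).
    image = norm.reshape(T, J * C)
    # Resample along time so sequences of any length map to a fixed height.
    idx = np.linspace(0, T - 1, height).round().astype(int)
    return image[idx]                       # shape (height, J * C), values in [0, 1]

def augment_djmi(djmi, rng):
    """Generate a new training sample from a DJMI-style image (assumed transforms)."""
    out = djmi.copy()
    if rng.random() < 0.5:
        out = out + rng.normal(0.0, 0.01, out.shape)   # jitter encoded joint positions
    if rng.random() < 0.5:
        out = np.flip(out, axis=0).copy()              # reverse the temporal order
    return np.clip(out, 0.0, 1.0)

# Example usage with synthetic data:
rng = np.random.default_rng(0)
sequence = rng.random((45, 20, 3))          # 45 frames, 20 joints, 3-D coordinates
djmi = skeleton_to_djmi(sequence)           # (64, 60) image fed to a DNN
sample = augment_djmi(djmi, rng)            # augmented copy for few-shot training
```

Because every row of such an image corresponds to exactly one frame, image-level operations such as noise injection or temporal flipping yield genuinely new spatio-temporal patterns rather than redrawn copies of the same trajectory, which is the property the abstract attributes to DJMI-based augmentation.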
Pages: 10
Related papers
50 records in total
  • [1] Lower Limb Action Recognition with Motion Data of a Human Joint
    Liang, Feng
    Zhang, Zhili
    Li, Xiangyang
    Tong, Zhao
    ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2016, 41 (12) : 5111 - 5121
  • [2] Data Augmentation and Dense-LSTM for Human Activity Recognition Using WiFi Signal
    Zhang, Jin
    Wu, Fuxiang
    Wei, Bo
    Zhang, Qieshi
    Huang, Hui
    Shah, Syed W.
    Cheng, Jun
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (06) : 4628 - 4641
  • [3] Recognition of human action in motion detected images with GMACA
    Peldek, Serkan
    Becerikli, Yasar
    JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY, 2019, 34 (02): 1025 - 1043
  • [4] Joint Motion Similarity (JMS)-Based Human Action Recognition using Kinect
    Li, Jiawei
    Chen, Jianxin
    Sun, Linhui
    2016 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2016, : 206 - 213
  • [5] Human action recognition in videos using structure similarity of aligned motion images
    Al-Ali, Salim
    Milanova, Mariofonna
    INTERNATIONAL JOURNAL OF REASONING-BASED INTELLIGENT SYSTEMS, 2014, 6 (1-2) : 71 - 82
  • [6] Human Action Recognition Based on Dense Sampling of Motion Boundary and Histogram of Motion Gradient
    Fan, Min
    Han, Qi
    Zhang, Xi
    Liu, Yaling
    Chen, Huan
    Hu, Yaqian
    PROCEEDINGS OF 2018 IEEE 7TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE (DDCLS), 2018, : 1033 - 1038
  • [7] Human Motion Recognition Using Directional Motion History Images
    Murakami, Makoto
    Tan, Joo Kooi
    Kim, Hyoungseop
    Ishikawa, Seiji
    INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2010), 2010, : 1445 - 1449
  • [8] Improvement of recognition rate using data augmentation with blurred images
    Ishikawa, Shiori
    Chiyonobu, Miho
    Iida, Sayaka
    Takata, Masami
    JOURNAL OF SUPERCOMPUTING, 2024, 80 (09): 12154 - 12165