Multi-input CNN-GRU based human activity recognition using wearable sensors

Cited by: 226
Authors
Dua, Nidhi [1 ]
Singh, Shiva Nand [1 ]
Semwal, Vijay Bhaskar [2 ]
Affiliations
[1] NIT Jamshedpur, Dept ECE, Jamshedpur, Jharkhand, India
[2] MANIT Bhopal, Dept CSE, Bhopal, MP, India
Keywords
Deep neural networks; Human activity recognition; CNN; Long short term memory (LSTM); GRU; Trajectories
DOI
10.1007/s00607-021-00928-8
CLC classification number
TP301 [Theory, Methods]
Discipline classification code
081202
Abstract
Human Activity Recognition (HAR) has attracted much attention from researchers in the recent past. Research into HAR has intensified, driven by the goal of understanding human behaviour and anticipating human intentions. Human activity data obtained via wearable sensors such as gyroscopes and accelerometers take the form of time series, as each reading has a timestamp associated with it. For HAR, it is important to extract the relevant temporal features from raw sensor data. Most approaches to HAR involve a substantial amount of feature engineering and data pre-processing, which in turn requires domain expertise; such approaches are time-consuming and application-specific. In this work, a Deep Neural Network based model combining a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) is proposed as an end-to-end model that performs both automatic feature extraction and classification of the activities. The experiments in this work were carried out on raw data obtained from wearable sensors with nominal pre-processing and do not involve any handcrafted feature extraction techniques. The accuracies obtained on the UCI-HAR, WISDM, and PAMAP2 datasets are 96.20%, 97.21%, and 95.27%, respectively. The results of the experiments establish that the proposed model achieves superior classification performance compared with other similar architectures.
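The end-to-end pipeline the abstract describes (convolutional layers extracting local temporal features from raw sensor windows, a GRU modelling longer-range dependencies, and a classification head) can be sketched roughly as follows. Layer counts, kernel sizes, and hidden dimensions are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    """Hypothetical CNN-GRU sketch for HAR: Conv1d layers learn local
    temporal features from raw multi-channel sensor windows, a GRU
    captures longer-range temporal structure, and a linear head emits
    activity-class logits. All sizes below are illustrative assumptions."""

    def __init__(self, n_channels: int = 9, n_classes: int = 6):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=128, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -- a raw sensor window
        f = self.cnn(x)              # (batch, 128, time // 4)
        f = f.transpose(1, 2)        # (batch, time // 4, 128) for the GRU
        _, h = self.gru(f)           # final hidden state: (1, batch, 64)
        return self.fc(h.squeeze(0)) # (batch, n_classes) logits

# 8 windows, 9 sensor channels (e.g. 3-axis accel/gyro/total-accel), 128 timesteps
x = torch.randn(8, 9, 128)
logits = CNNGRU()(x)
print(logits.shape)  # torch.Size([8, 6])
```

Because the model consumes raw windows directly, no handcrafted features are needed; training would pair this with a standard cross-entropy loss over the activity labels.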
Pages: 1461 - 1478
Number of pages: 18
Related papers
50 records in total
  • [1] Multi-input CNN-GRU based human activity recognition using wearable sensors
    Dua, Nidhi
    Singh, Shiva Nand
    Semwal, Vijay Bhaskar
    Computing, 2021, 103 : 1461 - 1478
  • [2] A Multichannel CNN-GRU Model for Human Activity Recognition
    Lu, Limeng
    Zhang, Chuanlin
    Cao, Kai
    Deng, Tao
    Yang, Qianqian
    IEEE Access, 2022, 10 : 66797 - 66810
  • [4] Human Activity Recognition Using Multi-input CNN Model with FFT Spectrograms
    Yaguchi, Kei
    Ikarigawa, Kazukiyo
    Kawasaki, Ryo
    Miyazaki, Wataru
    Morikawa, Yuki
    Ito, Chihiro
    Shuzo, Masaki
    Maeda, Eisaku
    UbiComp/ISWC '20 Adjunct: Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers, 2020, : 364 - 367
  • [5] Inception inspired CNN-GRU hybrid network for human activity recognition
    Dua, Nidhi
    Singh, Shiva Nand
    Semwal, Vijay Bhaskar
    Challa, Sravan Kumar
    Multimedia Tools and Applications, 2023, 82 (04) : 5369 - 5403
  • [7] SkeletonNet: A CNN-GRU Deep Learning Framework for Human Activity Recognition using Skeleton Data
    Monika
    Singh, Pardeep
    Chand, Satish
    Alpana
    Journal of Information Assurance and Security, 2023, 18 (02) : 39 - 47
  • [8] Trajectory Prediction and Intention Recognition Based on CNN-GRU
    Du, Jinghao
    Lu, Dongdong
    Li, Fei
    Liu, Ke
    Qiu, Xiaolan
    IEEE Access, 2025, 13 : 26945 - 26957
  • [9] Multi-sensor human activity recognition using CNN and GRU
    Nafea, Ohoud
    Abdul, Wadood
    Muhammad, Ghulam
    International Journal of Multimedia Information Retrieval, 2022, 11 (02) : 135 - 147