A Novel CNN-BiLSTM-GRU Hybrid Deep Learning Model for Human Activity Recognition

Cited: 0
Authors
Lalwani, Pooja [1 ]
Ganeshan, R. [1 ]
Affiliations
[1] VIT Bhopal Univ, Sch Comp Sci & Engn, Sehore 466114, Madhya Pradesh, India
Keywords
Human activity recognition; Deep learning models; Bipedal robots; Accelerometer; Sensors; Convolutional neural networks; Long short-term memory; Bidirectional long short-term memory; Smartphone; CLASSIFICATION; NETWORK;
DOI
10.1007/s44196-024-00689-0
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human Activity Recognition (HAR) is critical in a variety of disciplines, including healthcare and robotics. This paper presents a new Convolutional Neural Network combined with Bidirectional Long Short-Term Memory and a Gated Recurrent Unit (CNN-BiLSTM-GRU), a hybrid deep learning model designed for HAR that uses data from wearable sensors and mobile devices. The model achieves an accuracy of 99.7% on the challenging Wireless Sensor Data Mining (WISDM) dataset, demonstrating its ability to identify human activities correctly. The study emphasizes parameter optimization, identifying a batch size of 0.3 as a significant factor in improving the model's robustness. The findings also have implications for bipedal robotics, where precise HAR is critical to improving human-robot interaction quality and overall task efficiency. Beyond strengthening HAR techniques, the results offer practical benefits in real-world applications, particularly in robotics and healthcare. The study thus contributes to the continuing development of HAR methods and their practical deployment, underscoring their role in driving innovation and efficiency across a wide range of industries.
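The abstract describes the hybrid architecture only at a high level: a convolutional front end followed by a bidirectional LSTM and a GRU, applied to windowed wearable-sensor data. The code below is a minimal TensorFlow/Keras sketch of such a CNN-BiLSTM-GRU stack for tri-axial accelerometer windows; the window length, layer widths, number of activity classes, dropout rate, and training settings are illustrative assumptions, not the configuration reported in the paper.

# Minimal sketch of a CNN-BiLSTM-GRU classifier for WISDM-style accelerometer windows.
# All hyperparameters here are assumed for illustration; they are not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bilstm_gru(window_len=200, n_channels=3, n_classes=6):
    # Raw tri-axial accelerometer window: (time steps, channels)
    inputs = layers.Input(shape=(window_len, n_channels))

    # 1D CNN block: extracts local motion features from the sensor stream
    x = layers.Conv1D(64, kernel_size=5, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(128, kernel_size=5, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)

    # BiLSTM block: models temporal dependencies in both directions
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

    # GRU block: compresses the sequence into a fixed-length representation
    x = layers.GRU(64)(x)

    x = layers.Dropout(0.3)(x)  # regularization; the 0.3 rate is an assumption
    outputs = layers.Dense(n_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_cnn_bilstm_gru()
    model.summary()
    # Training would use windowed, labeled WISDM segments, for example:
    # model.fit(X_train, y_train, validation_split=0.3, epochs=50, batch_size=64)

A usage note: the WISDM accelerometer stream is typically segmented into fixed-length, possibly overlapping windows before being fed to a model of this kind; the 200-sample window and six-class output above follow that common practice but are assumptions on our part.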
Pages: 20
Related Papers
50 records in total
  • [31] Hybrid CNN-GRU Model for High Efficient Handwritten Digit Recognition
    Vantruong Nguyen
    Cai, Jueping
    Chu, Jie
    2019 2ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND PATTERN RECOGNITION (AIPR 2019), 2019, : 66 - 71
  • [32] Hybrid Deep Learning Models for Tennis Action Recognition: Enhancing Professional Training Through CNN-BiLSTM Integration
    Chen, Zhaokun
    Xie, Qin
    Jiang, Wei
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2025, 37 (6-8):
  • [33] A hybrid deep learning model for UWB radar-based human activity recognition
    Khan, Irfanullah
    Guerrieri, Antonio
    Serra, Edoardo
    Spezzano, Giandomenico
    INTERNET OF THINGS, 2025, 29
  • [34] Wearable Sensor-Based Human Activity Recognition with Hybrid Deep Learning Model
    Luwe, Yee Jia
    Lee, Chin Poo
    Lim, Kian Ming
    INFORMATICS-BASEL, 2022, 9 (03):
  • [35] ENHANCING HUMAN ACTIVITY RECOGNITION THROUGH SENSOR FUSION AND HYBRID DEEP LEARNING MODEL
    Tarekegn, Adane Nega
    Ullah, Mohib
    Cheikh, Faouzi Alaya
    Sajjad, Muhammad
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [36] Wearable sensors for human activity recognition based on a self-attention CNN-BiLSTM model
    Guo, Huafeng
    Xiang, Changcheng
    Chen, Shiqiang
    SENSOR REVIEW, 2023, 43 (5/6) : 347 - 358
  • [37] A Hybrid Deep Model Using Deep Learning and Dense Optical Flow Approaches for Human Activity Recognition
    Tanberk, Senem
    Kilimci, Zeynep Hilal
    Tukel, Dilek Bilgin
    Uysal, Mitat
    Akyokus, Selim
    IEEE ACCESS, 2020, 8 : 19799 - 19809
  • [38] Evaluation of deep learning model for human activity recognition
    Bhat, Owais
    Khan, Dawood A.
    EVOLVING SYSTEMS, 2022, 13 (01) : 159 - 168
  • [39] Evaluation of deep learning model for human activity recognition
    Owais Bhat
    Dawood A Khan
    Evolving Systems, 2022, 13 : 159 - 168
  • [40] CNN-based hybrid deep learning framework for human activity classification
    Ahmad, Naeem
    Ghosh, Sunit
    Rout, Jitendra Kumar
    INTERNATIONAL JOURNAL OF SENSOR NETWORKS, 2024, 44 (02) : 74 - 83