A Hierarchical-Based Learning Approach for Multi-Action Intent Recognition

Cited by: 0
Authors
Hollinger, David [1 ]
Pollard, Ryan S. [1 ]
Schall Jr, Mark C. [2 ]
Chen, Howard [3 ]
Zabala, Michael [1 ]
Affiliations
[1] Auburn Univ, Dept Mech Engn, Auburn, AL 36849 USA
[2] Auburn Univ, Dept Ind & Syst Engn, Auburn, AL 36849 USA
[3] Univ Alabama, Dept Ind & Syst Engn & Engn Management, Huntsville, AL 35899 USA
Keywords
wearable sensors; accelerometers; gyroscopes; movement intent prediction
DOI
10.3390/s24237857
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Recent applications of wearable inertial measurement units (IMUs) for predicting human movement have often entailed estimating action-level (e.g., walking, running, jumping) and joint-level (e.g., ankle plantarflexion angle) motion. Although action-level or joint-level information is frequently the focus of movement intent prediction, contextual information is necessary for a more thorough approach to intent recognition. Therefore, a combination of action-level and joint-level information may offer a more comprehensive approach to predicting movement intent. In this study, we devised a novel hierarchical-based method combining action-level classification and subsequent joint-level regression to predict joint angles 100 ms into the future. K-nearest neighbors (KNN), bidirectional long short-term memory (BiLSTM), and temporal convolutional network (TCN) models were employed for action-level classification, and a random forest model trained on action-specific IMU data was used for joint-level prediction. A joint-level action-generic model trained on multiple actions (e.g., backward walking, kneeling down, kneeling up, running, and walking) was also used for predicting the joint angle. Compared with a hierarchical-based approach, the action-generic model had lower prediction error for backward walking, kneeling down, and kneeling up. Although the TCN and BiLSTM classifiers achieved classification accuracies of 89.87% and 89.30%, respectively, they did not surpass the performance of the action-generic random forest model when used in combination with an action-specific random forest model. This may have been because the action-generic approach was trained on more data from multiple actions. This study demonstrates the advantage of leveraging large, disparate data sources over a hierarchical-based approach for joint-level prediction. Moreover, it demonstrates the efficacy of an IMU-driven, task-agnostic model in predicting future joint angles across multiple actions.
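To make the comparison described in the abstract concrete, below is a minimal sketch of the two pipelines, written against scikit-learn. The variable names, the synthetic stand-in data, and the choice of KNN for the classifier are illustrative assumptions, not taken from the paper (which also evaluates BiLSTM and TCN classifiers); in real use, X would hold windowed IMU features and y_angle the joint angle measured 100 ms ahead of each window.

# Minimal sketch (assumed setup, not the authors' code): hierarchical
# classify-then-regress pipeline vs. a single action-generic regressor.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5 actions, 40-dimensional IMU feature windows.
ACTIONS = ["backward_walking", "kneeling_down", "kneeling_up", "running", "walking"]
X = rng.normal(size=(1000, 40))                      # windowed IMU features
y_action = rng.integers(0, len(ACTIONS), size=1000)  # action label per window
y_angle = rng.normal(size=1000)                      # joint angle 100 ms ahead

X_tr, X_te, a_tr, a_te, ang_tr, ang_te = train_test_split(
    X, y_action, y_angle, test_size=0.2, random_state=0
)

# 1) Action-level classifier (KNN here; the paper also uses BiLSTM and TCN).
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, a_tr)

# 2a) Hierarchical branch: one action-specific random forest per action.
specific = {
    a: RandomForestRegressor(n_estimators=100, random_state=0).fit(
        X_tr[a_tr == a], ang_tr[a_tr == a]
    )
    for a in range(len(ACTIONS))
}

# 2b) Action-generic branch: one random forest trained on all actions pooled.
generic = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, ang_tr)

def predict_hierarchical(x):
    """Classify the action, then use that action's regressor for the joint angle."""
    a = int(clf.predict(x.reshape(1, -1))[0])
    return specific[a].predict(x.reshape(1, -1))[0]

hier_pred = np.array([predict_hierarchical(x) for x in X_te])
gen_pred = generic.predict(X_te)
print("hierarchical MAE:", np.mean(np.abs(hier_pred - ang_te)))
print("action-generic MAE:", np.mean(np.abs(gen_pred - ang_te)))

On the random synthetic data above the error comparison is meaningless; the sketch only illustrates the structure of the two branches (classify-then-regress vs. pooled action-generic), whose relative prediction error the paper evaluates on real IMU data.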
Pages: 19