A Hierarchical-Based Learning Approach for Multi-Action Intent Recognition

Cited: 0
Authors
Hollinger, David [1 ]
Pollard, Ryan S. [1 ]
Schall Jr, Mark C. [2 ]
Chen, Howard [3 ]
Zabala, Michael [1 ]
Institutions
[1] Auburn Univ, Dept Mech Engn, Auburn, AL 36849 USA
[2] Auburn Univ, Dept Ind & Syst Engn, Auburn, AL 36849 USA
[3] Univ Alabama, Dept Ind & Syst Engn & Engn Management, Huntsville, AL 35899 USA
Keywords
wearable sensors; accelerometers; gyroscopes; movement intent prediction
DOI
10.3390/s24237857
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704
Abstract
Recent applications of wearable inertial measurement units (IMUs) for predicting human movement have often entailed estimating action-level (e.g., walking, running, jumping) and joint-level (e.g., ankle plantarflexion angle) motion. Although action-level or joint-level information is frequently the focus of movement intent prediction, contextual information is necessary for a more thorough approach to intent recognition. Therefore, a combination of action-level and joint-level information may offer a more comprehensive approach to predicting movement intent. In this study, we devised a novel hierarchical-based method combining action-level classification with subsequent joint-level regression to predict joint angles 100 ms into the future. K-nearest neighbors (KNN), bidirectional long short-term memory (BiLSTM), and temporal convolutional network (TCN) models were employed for action-level classification, and a random forest model trained on action-specific IMU data was used for joint-level prediction. An action-generic joint-level model trained on multiple actions (e.g., backward walking, kneeling down, kneeling up, running, and walking) was also used to predict joint angles. Compared with the hierarchical-based approach, the action-generic model had lower prediction error for backward walking, kneeling down, and kneeling up. Although the TCN and BiLSTM classifiers achieved classification accuracies of 89.87% and 89.30%, respectively, when paired with action-specific random forest models they did not surpass the action-generic random forest model. This may be because the action-generic approach was trained on more data from multiple actions. This study demonstrates the advantage of leveraging large, disparate data sources over a hierarchical-based approach for joint-level prediction, and it demonstrates the efficacy of an IMU-driven, task-agnostic model in predicting future joint angles across multiple actions.
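The two-stage pipeline the abstract describes can be sketched in a few lines: an action-level classifier routes each IMU feature window to an action-specific regressor that predicts a future joint angle, and a single pooled model serves as the action-generic baseline. This is a minimal illustrative sketch, not the authors' implementation: the data are synthetic, the feature dimensions and hyperparameters are assumptions, and KNN stands in for the KNN/BiLSTM/TCN classifier options.

```python
# Hedged sketch of the hierarchical classify-then-regress pipeline:
# stage 1 classifies the action, stage 2 regresses the joint angle with
# an action-specific model; a pooled model is the action-generic baseline.
# All data below is synthetic and all hyperparameters are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
ACTIONS = ["walking", "running", "kneeling_down"]  # subset for illustration
N_PER_ACTION, N_FEATURES = 60, 12  # windows per action, IMU features/window

# Synthetic IMU feature windows: each action gets its own feature offset
# and its own mapping from features to the joint angle 100 ms ahead.
X, y_action, y_angle = [], [], []
for a_idx, action in enumerate(ACTIONS):
    feats = rng.normal(loc=a_idx * 3.0, size=(N_PER_ACTION, N_FEATURES))
    w = rng.normal(size=N_FEATURES)
    X.append(feats)
    y_action += [action] * N_PER_ACTION
    y_angle.append(feats @ w + a_idx * 10.0)
X = np.vstack(X)
y_action = np.array(y_action)
y_angle = np.concatenate(y_angle)

# Stage 1: action-level classifier (KNN as a stand-in for KNN/BiLSTM/TCN).
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y_action)

# Stage 2: one action-specific random forest regressor per action.
specific = {
    action: RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[y_action == action], y_angle[y_action == action]
    )
    for action in ACTIONS
}

# Action-generic baseline: one random forest trained on all actions pooled.
generic = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_angle)

def predict_hierarchical(x):
    """Classify the window's action, then apply that action's regressor."""
    action = clf.predict(x.reshape(1, -1))[0]
    return specific[action].predict(x.reshape(1, -1))[0]

sample = X[0]
print("hierarchical prediction:", round(predict_hierarchical(sample), 2))
print("action-generic prediction:", round(generic.predict(sample.reshape(1, -1))[0], 2))
```

In this toy setup the generic model sees three times as much data as any single action-specific model, which mirrors the paper's suggested explanation for why the pooled approach can outperform the hierarchy.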
Pages: 19