Human Motion Pose Rapid Tracking Using Improved Deep Reinforcement Learning and Multimodal Fusion

Cited: 0
Authors
Li, Zhipeng [1 ]
Yang, Zengbao [2 ]
Yang, Ruizhu [2 ]
Wang, Nan [2 ]
Song, Wenli [2 ]
Zhang, Xingfu [3 ]
Affiliations
[1] Harbin Sport Univ, Winter Olymp Coll, Harbin 150008, Peoples R China
[2] Harbin Sport Univ, Coll Phys Educ & Training, Harbin 150008, Peoples R China
[3] Heilongjiang Inst Technol, Comp Sci & Technol, Harbin 150050, Peoples R China
Keywords
Human motion pose; multimodal fusion; deep reinforcement learning; human tracking; algorithm
DOI
10.1142/S0219467827500331
CLC Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Rapid human motion pose tracking has extensive applications in fields such as motion capture, intelligent monitoring, sports training, and physical health management: it can provide accurate data support, enhance safety monitoring, optimize training outcomes, and promote physical health. Traditional human pose tracking methods rely predominantly on either sensors or images alone, which often results in low tracking accuracy and slow tracking speed. To address these problems, a rapid human motion pose tracking method based on improved deep reinforcement learning and multimodal fusion is proposed. First, this paper designs an overall architecture for rapid human motion pose tracking and uses a combination of monocular vision and sensors to extract and collect human motion data. Second, it constructs a complementary filter-based multimodal data fusion method to merge the multimodal data and extract fused features. Finally, a multi-level attention network is employed to enhance the deep reinforcement learning network, which is trained on the fused features to achieve rapid human motion pose tracking. The results show that the proposed method achieves efficient and stable human motion pose tracking in complex scenes, with a tracking accuracy of up to 85% and a shortest tracking time of 72 ms, demonstrating practical application value.
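The complementary-filter fusion step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the function name, the blend weight `alpha`, and the sample period `dt` are assumptions. It fuses a drift-free but noisy vision-based angle estimate with a smooth but drift-prone angle obtained by integrating gyroscope rates:

```python
import numpy as np

def complementary_filter(gyro_rates, vision_angles, dt=0.01, alpha=0.98):
    """Fuse two modalities into one joint-angle track (illustrative sketch).

    gyro_rates    : angular rates from an inertial sensor, shape (T,)
    vision_angles : angle estimates from monocular vision, shape (T,)
    alpha         : weight on the gyro-propagated estimate (assumed value);
                    the vision estimate receives weight (1 - alpha)
    """
    fused = np.zeros_like(vision_angles, dtype=float)
    fused[0] = vision_angles[0]  # initialize from the drift-free modality
    for t in range(1, len(vision_angles)):
        # Propagate the previous fused angle with the high-rate gyro signal,
        # then correct it toward the low-rate vision measurement.
        gyro_pred = fused[t - 1] + gyro_rates[t] * dt
        fused[t] = alpha * gyro_pred + (1.0 - alpha) * vision_angles[t]
    return fused
```

The high `alpha` keeps the output responsive to fast motion captured by the gyroscope, while the small vision term continuously pulls the estimate back and bounds gyro drift; the fused track could then serve as one input feature to the tracking network.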
Pages: 16
Related Papers (50 in total)
  • [1] Estimation on Human Motion Posture Using Improved Deep Reinforcement Learning
    Ma, Wenjing
    Zhao, Jianguang
    Zhu, Guangquan
    JOURNAL OF COMPUTERS (TAIWAN), 2023, 34 (04): 97-110
  • [2] Multimodal Biometrics Fusion Algorithm Using Deep Reinforcement Learning
    Huang, Quan
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2022, 2022
  • [3] Multifeature Fusion Human Motion Behavior Recognition Algorithm Using Deep Reinforcement Learning
    Lu, Chengkun
    MOBILE INFORMATION SYSTEMS, 2021, 2021
  • [4] Deep Reinforcement Learning for Active Human Pose Estimation
    Gartner, Erik
    Pirinen, Aleksis
    Sminchisescu, Cristian
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 10835 - 10844
  • [5] Real-time pose estimation and motion tracking for motion performance using deep learning models
    Liu, Long
    Dai, Yuxin
    Liu, Zhihao
    JOURNAL OF INTELLIGENT SYSTEMS, 2024, 33 (01)
  • [6] Abnormal Behavior Recognition for Human Motion Based on Improved Deep Reinforcement Learning
    Duan, Xueying
    INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, 2023, 24 (01)
  • [8] RETRACTED: Gesture Tracking and Recognition Algorithm for Dynamic Human Motion Using Multimodal Deep Learning (Retracted Article)
    Xia, Zhonghua
    Xing, Jinming
    Li, Xiaofeng
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [9] Recognition of human motion with deep reinforcement learning
    Seok W.
    Park C.
    IEIE TRANSACTIONS ON SMART PROCESSING AND COMPUTING, 2018, 7 (03): 245-250
  • [10] MoDeep: A Deep Learning Framework Using Motion Features for Human Pose Estimation
    Jain, Arjun
    Tompson, Jonathan
    LeCun, Yann
    Bregler, Christoph
    COMPUTER VISION - ACCV 2014, PT II, 2015, 9004 : 302 - 315