Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling

Cited by: 5
Authors:
Korivand, Soroush [1 ,2 ]
Jalili, Nader [1 ]
Gong, Jiaqi [2 ]
Affiliations:
[1] Univ Alabama, Dept Mech Engn, Tuscaloosa, AL 35401 USA
[2] Univ Alabama, Dept Comp Sci, Tuscaloosa, AL 35401 USA
Keywords:
reinforcement learning; locomotion disorder; IMU sensor; musculoskeletal simulation; muscle contributions; dynamic simulations; optimization; support; level; knee; arm
DOI:
10.3390/s23052698
CLC Number:
O65 [Analytical Chemistry]
Discipline Codes:
070302; 081704
Abstract
Locomotor impairment is a highly prevalent source of disability that significantly impacts the quality of life of a large portion of the population. Despite decades of research on human locomotion, challenges remain in simulating human movement to study the features of musculoskeletal drivers and clinical conditions. Recent efforts to utilize reinforcement learning (RL) techniques are promising for simulating human locomotion and revealing musculoskeletal drivers. However, these simulations often fail to mimic natural human locomotion because most reinforcement strategies have yet to consider any reference data regarding human movement. To address these challenges, in this study we designed a reward function based on trajectory optimization rewards (TOR) and bio-inspired rewards, which include rewards obtained from reference motion data captured by a single Inertial Measurement Unit (IMU) sensor. The sensor was mounted on the participants' pelvis to capture reference motion data. We also adapted the TOR component of the reward function by leveraging previous research on walking simulations. The experimental results showed that simulated agents with the modified reward function mimicked the IMU data collected from participants more closely, meaning that the simulated human locomotion was more realistic. Used as a bio-inspired cost, the IMU data enhanced the agent's ability to converge during training. As a result, the models converged faster than those developed without reference motion data. Consequently, human locomotion can be simulated more quickly, in a broader range of environments, and with better simulation performance.
Pages: 16
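The abstract describes a composite reward that combines trajectory-optimization rewards (TOR) with a bio-inspired imitation term derived from pelvis IMU reference data. The sketch below illustrates one plausible way such a reward could be structured; the function names, the exponential form of the imitation term, and the weight w_imu are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def imu_imitation_reward(sim_pelvis_signal, ref_pelvis_signal, scale=1.0):
    """Bio-inspired term: rewards the agent for reproducing the pelvis IMU
    signal recorded from a participant. Hypothetical formulation."""
    error = np.sum((np.asarray(sim_pelvis_signal) - np.asarray(ref_pelvis_signal)) ** 2)
    # Exponential of the negative squared error keeps the reward in (0, 1],
    # approaching 1 as the simulated signal matches the reference.
    return np.exp(-scale * error)

def composite_reward(tor_reward, sim_pelvis_signal, ref_pelvis_signal, w_imu=0.5):
    """Weighted sum of a trajectory-optimization reward (TOR) and the
    IMU imitation reward at the current simulation step."""
    return (1.0 - w_imu) * tor_reward + w_imu * imu_imitation_reward(
        sim_pelvis_signal, ref_pelvis_signal
    )

# Example for one time step, with made-up pelvis acceleration values (m/s^2).
r = composite_reward(
    tor_reward=0.8,
    sim_pelvis_signal=[0.10, -0.20, 9.70],
    ref_pelvis_signal=[0.12, -0.18, 9.81],
)
print(f"composite reward: {r:.3f}")
```

With a weighted-sum structure like this, increasing w_imu would push the agent toward reproducing the recorded pelvis motion, while decreasing it would favor the TOR objective alone.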