Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling

Cited by: 5
Authors
Korivand, Soroush [1 ,2 ]
Jalili, Nader [1 ]
Gong, Jiaqi [2 ]
Affiliations
[1] Univ Alabama, Dept Mech Engn, Tuscaloosa, AL 35401 USA
[2] Univ Alabama, Dept Comp Sci, Tuscaloosa, AL 35401 USA
Keywords
reinforcement learning; locomotion disorder; IMU sensor; musculoskeletal simulation; MUSCLE CONTRIBUTIONS; DYNAMIC SIMULATIONS; OPTIMIZATION; SUPPORT; LEVEL; KNEE; ARM;
DOI
10.3390/s23052698
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Locomotor impairment is a highly prevalent source of disability that substantially reduces the quality of life of a large portion of the population. Despite decades of research on human locomotion, challenges remain in simulating human movement to study the features of musculoskeletal drivers and clinical conditions. Recent efforts to apply reinforcement learning (RL) techniques are promising for simulating human locomotion and revealing musculoskeletal drivers. However, these simulations often fail to mimic natural human locomotion because most reinforcement strategies do not incorporate any reference data on human movement. To address these challenges, in this study, we designed a reward function based on trajectory optimization rewards (TOR) and bio-inspired rewards, which include rewards derived from reference motion data captured by a single Inertial Measurement Unit (IMU) sensor. The sensor was mounted on the participants' pelvis to capture reference motion data. We also adapted the reward function by leveraging previous research on walking simulations for the TOR. The experimental results showed that simulated agents with the modified reward function performed better at mimicking the IMU data collected from participants, meaning that the simulated human locomotion was more realistic. Used as a bio-inspired cost, the IMU data enhanced the agent's ability to converge during training. As a result, the models converged faster than those developed without reference motion data. Consequently, human locomotion can be simulated more quickly and in a broader range of environments, with better simulation performance.
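The abstract describes a composite reward that combines trajectory-optimization (TOR) terms with a bio-inspired imitation term computed from the pelvis IMU reference signal. A minimal Python sketch of such a reward is given below; the Gaussian imitation kernel, the target walking speed, the effort penalty, and all weights and function names are illustrative assumptions, not the authors' published implementation.

import numpy as np

def imu_imitation_reward(sim_pelvis_acc, ref_pelvis_acc, sigma=1.0):
    # Bio-inspired term: reward similarity between the simulated and the
    # reference pelvis IMU signals (hypothetical Gaussian kernel).
    err = np.linalg.norm(np.asarray(sim_pelvis_acc) - np.asarray(ref_pelvis_acc))
    return float(np.exp(-(err ** 2) / (2.0 * sigma ** 2)))

def trajectory_optimization_reward(forward_velocity, muscle_activations,
                                   target_velocity=1.25, w_effort=0.05):
    # TOR-style term: track a target walking speed and penalize muscle effort
    # (target speed and weight are illustrative assumptions).
    velocity_term = -abs(forward_velocity - target_velocity)
    effort_term = -w_effort * float(np.sum(np.square(muscle_activations)))
    return velocity_term + effort_term

def total_reward(sim_pelvis_acc, ref_pelvis_acc, forward_velocity,
                 muscle_activations, w_imu=0.5):
    # Composite per-timestep reward: TOR terms plus the IMU-based term.
    return (trajectory_optimization_reward(forward_velocity, muscle_activations)
            + w_imu * imu_imitation_reward(sim_pelvis_acc, ref_pelvis_acc))

# Example per-timestep usage with made-up values:
r = total_reward(sim_pelvis_acc=[0.1, -9.7, 0.3],
                 ref_pelvis_acc=[0.0, -9.8, 0.2],
                 forward_velocity=1.1,
                 muscle_activations=[0.2, 0.4, 0.1])

In this sketch the IMU term acts as the "bio-inspired cost" the abstract refers to: agents that reproduce the measured pelvis signal receive a higher reward, which is the mechanism credited with faster convergence relative to training on the TOR terms alone.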
Pages: 16
Related Papers
Showing items 31-40 of 50
  • [31] Modeling Individual Human Motor Behavior Through Model Reference Iterative Learning Control
    Zhou, Shou-Han
    Oetomo, Denny
    Tan, Ying
    Burdet, Etienne
    Mareels, Iven
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2012, 59 (07) : 1892 - 1901
  • [32] Multi-Objective Network Congestion Control via Constrained Reinforcement Learning
    Liu, Qiong
    Yang, Peng
    Lyu, Feng
    Zhang, Ning
    Yu, Li
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [33] Reinforcement learning solution for HJB equation arising in constrained optimal control problem
    Luo, Biao
    Wu, Huai-Ning
    Huang, Tingwen
    Liu, Derong
    NEURAL NETWORKS, 2015, 71 : 150 - 158
  • [34] Optimizing Cascaded Control of Mechatronic Systems through Constrained Residual Reinforcement Learning
    Staessens, Tom
    Lefebvre, Tom
    Crevecoeur, Guillaume
    MACHINES, 2023, 11 (03)
  • [35] Personalized robotic control via constrained multi-objective reinforcement learning
    He, Xiangkun
    Hu, Zhongxu
    Yang, Haohan
    Lv, Chen
    NEUROCOMPUTING, 2024, 565
  • [36] Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control
    Feng, Jie
    Shi, Yuanyuan
    Qu, Guannan
    Low, Steven H.
    Anandkumar, Anima
    Wierman, Adam
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2024, 11 (03): 1370 - 1381
  • [37] Reinforcement learning establishes a minimal metacognitive process to monitor and control motor learning performance
    Sugiyama, Taisei
    Schweighofer, Nicolas
    Izawa, Jun
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [38] Toward a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control
    Traue, Arne
    Book, Gerrit
    Kirchgassner, Wilhelm
    Wallscheid, Oliver
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (03) : 919 - 928
  • [40] Optimal control with deep reinforcement learning for shunt compensations to enhance voltage stability
    Cao, Shang
    Liao, Shiwu
    Wang, Shaorong
    Luo, Xiaotong
    2020 5TH ASIA CONFERENCE ON POWER AND ELECTRICAL ENGINEERING (ACPEE 2020), 2020: 398 - 403