Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling

Cited: 5
Authors
Korivand, Soroush [1 ,2 ]
Jalili, Nader [1 ]
Gong, Jiaqi [2 ]
Affiliations
[1] Univ Alabama, Dept Mech Engn, Tuscaloosa, AL 35401 USA
[2] Univ Alabama, Dept Comp Sci, Tuscaloosa, AL 35401 USA
Keywords
reinforcement learning; locomotion disorder; IMU sensor; musculoskeletal simulation; MUSCLE CONTRIBUTIONS; DYNAMIC SIMULATIONS; OPTIMIZATION; SUPPORT; LEVEL; KNEE; ARM;
DOI
10.3390/s23052698
Chinese Library Classification (CLC) number
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
Locomotor impairment is highly prevalent and a major source of disability, substantially reducing the quality of life of a large portion of the population. Despite decades of research on human locomotion, challenges remain in simulating human movement to study musculoskeletal drivers and clinical conditions. Recent efforts to apply reinforcement learning (RL) techniques are promising for simulating human locomotion and revealing its musculoskeletal drivers. However, these simulations often fail to mimic natural human locomotion because most RL strategies do not incorporate reference data on human movement. To address these challenges, in this study we designed a reward function that combines trajectory optimization rewards (TOR) with bio-inspired rewards derived from reference motion data captured by a single Inertial Measurement Unit (IMU) sensor mounted on the participants' pelvis. We also adapted the TOR component of the reward function by leveraging previous research on walking simulations. The experimental results showed that simulated agents trained with the modified reward function better reproduced the IMU data collected from participants, indicating more realistic simulated locomotion. Used as a bio-inspired cost term, the IMU data also improved the agents' convergence during training, so the models converged faster than those trained without reference motion data. Consequently, human locomotion can be simulated more quickly, in a broader range of environments, and with better simulation performance.
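The reward design summarized above combines trajectory optimization rewards (TOR) with a bio-inspired term that tracks pelvis IMU reference data. The Python sketch below illustrates one plausible way to assemble such a per-step reward; the function names, weights, and the exponential tracking kernel are illustrative assumptions, not the authors' implementation.

import numpy as np

# Minimal sketch (assumed formulation) of a per-step reward that combines
# TOR-style terms with a bio-inspired IMU-tracking term.

def imu_tracking_reward(sim_imu, ref_imu, scale=5.0):
    # Reward for matching a recorded pelvis IMU sample (accelerations and
    # angular velocities); decays exponentially with the squared tracking error.
    err = np.sum((np.asarray(sim_imu) - np.asarray(ref_imu)) ** 2)
    return np.exp(-scale * err)

def trajectory_optimization_reward(forward_velocity, target_velocity,
                                   muscle_activations, w_vel=1.0, w_effort=0.05):
    # TOR-style terms: track a target walking speed and penalize muscular
    # effort (sum of squared muscle activations).
    vel_term = -w_vel * (forward_velocity - target_velocity) ** 2
    effort_term = -w_effort * float(np.sum(np.square(muscle_activations)))
    return vel_term + effort_term

def combined_reward(sim_imu, ref_imu, forward_velocity, target_velocity,
                    muscle_activations, w_imu=2.0):
    # Total per-step reward: TOR terms plus the weighted IMU-tracking term.
    return (trajectory_optimization_reward(forward_velocity, target_velocity,
                                           muscle_activations)
            + w_imu * imu_tracking_reward(sim_imu, ref_imu))

# Example step: simulated vs. recorded pelvis IMU sample (ax, ay, az, wx, wy, wz).
sim_sample = np.array([0.1, -9.7, 0.3, 0.02, 0.01, -0.05])
ref_sample = np.array([0.0, -9.8, 0.2, 0.00, 0.02, -0.04])
activations = np.random.uniform(0.0, 0.4, size=22)  # hypothetical set of 22 lower-limb muscles
print(combined_reward(sim_sample, ref_sample,
                      forward_velocity=1.25, target_velocity=1.30,
                      muscle_activations=activations))

In a setup like this, the weight w_imu controls how strongly reference tracking shapes the learned policy relative to the TOR terms, and the exponential kernel keeps the tracking reward bounded, which tends to stabilize training.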
Pages: 16
Related papers
50 items in total
  • [21] Reinforcement learning speed control of a separately excited DC motor
    Benmakhlouf, Abdeslam
    Zidani, Ghania
    Djarah, Djalal
    ELEKTROTEHNISKI VESTNIK, 2024, 91 (05): 257-264
  • [22] Fuzzy control based on reinforcement learning for voice coil motor
    Liu, T. S.
    Chang, W. K.
    2005 ICSC Congress on Computational Intelligence Methods and Applications (CIMA 2005), 2005: 264-269
  • [23] An Intelligent Control Method for Servo Motor Based on Reinforcement Learning
    Gao, Depeng
    Wang, Shuai
    Yang, Yuwei
    Zhang, Haifei
    Chen, Hao
    Mei, Xiangxiang
    Chen, Shuxi
    Qiu, Jianlin
    ALGORITHMS, 2024, 17 (01)
  • [25] Modeling of plant dynamics and control based on reinforcement learning
    Maeda, Tomoyuki
    Nakayama, Makishi
    Kitamura, Akira
    2006 SICE-ICASE INTERNATIONAL JOINT CONFERENCE, VOLS 1-13, 2006: 3088+
  • [26] Modeling and control for plant dynamics based on reinforcement learning
    Maeda, Tomoyuki
    Nakayama, Makishi
    Narazaki, Hiroshi
    Kitamura, Akira
    IEEJ Transactions on Industry Applications, 2009, 129 (04): 363-367
  • [27] Reinforcement Learning for Control of Human Locomotion in Simulation
    Dashkovets, Andrii
    Laschowski, Brokoslaw
    2024 10TH IEEE RAS/EMBS INTERNATIONAL CONFERENCE FOR BIOMEDICAL ROBOTICS AND BIOMECHATRONICS, BIOROB 2024, 2024: 43-48
  • [28] Limit Action Space to Enhance Drone Control with Deep Reinforcement Learning
    Jang, Sooyoung
    Park, Noh-Sam
    11TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE: DATA, NETWORK, AND AI IN THE AGE OF UNTACT (ICTC 2020), 2020: 1212-1215
  • [29] Reinforcement Learning based Approach for Virtual Inertia Control in Microgrids with Renewable Energy Sources
    Skiparev, Vjatseslav
    Belikov, Juri
    Petlenkov, Eduard
    2020 IEEE PES INNOVATIVE SMART GRID TECHNOLOGIES EUROPE (ISGT-EUROPE 2020): SMART GRIDS: KEY ENABLERS OF A GREEN POWER SYSTEM, 2020: 1020-1024
  • [30] A Reinforcement Learning Approach for Fast Frequency Control in Low-Inertia Power Systems
    Stanojev, Ognjen
    Kundacina, Ognjen
    Markovic, Uros
    Vrettos, Evangelos
    Aristidou, Petros
    Hug, Gabriela
    2020 52ND NORTH AMERICAN POWER SYMPOSIUM (NAPS), 2021