Preference-Based Assistance Map Learning With Robust Adaptive Oscillators

Times Cited: 1
Authors
Li, Shilei [1 ,2 ]
Zou, Wulin [1 ,2 ]
Duan, Pu [2 ]
Shi, Ling [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Xeno Dynam Co Ltd, Control Dept, Shenzhen 518055, Peoples R China
Keywords
Robust adaptive oscillators; Gaussian process regression; muscle activities; hip exoskeleton; human walking; gait; optimization; cost
DOI
10.1109/TMRB.2022.3206609
CLC Number
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Recently, lower-limb exoskeletons have demonstrated the ability to enhance human mobility by reducing biological effort through human-in-the-loop (HIL) optimization. However, this technology remains confined to the laboratory and is difficult to generalize to daily applications, where gaits are more complex and professional equipment is unavailable. To address this issue, we first present a robust adaptive oscillator (RAO) to synchronize human-robot movement and extract gait features. We then use Gaussian process regression (GPR) to map the subjects' preferred assistance parameters to gait features. Experiments show that the RAO converges faster than traditional adaptive oscillators, and that the proposed method learns more efficiently than HIL optimization. The effectiveness of the method is validated on a hip exoskeleton with 7 participants walking at 5 km/h. Three muscles (rectus femoris, tibialis anterior, and medial gastrocnemius) are investigated under three conditions: user-preferred assistance (ASS), zero torque (ZT), and normal walking (NW). All muscles show reduced activity in ASS mode compared with ZT or NW, and the reduction in medial gastrocnemius activity is statistically significant with respect to both ZT and NW (-15.63 +/- 6.51% and -8.73 +/- 6.40%, respectively).
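The abstract's core learning step, regressing an assistance parameter against oscillator-extracted gait features with GPR, can be illustrated with a minimal sketch. This is not the paper's code: the feature names (stride frequency, phase at peak hip flexion), the synthetic preference function, the kernel choice, and the mapping direction (features to parameter) are all illustrative assumptions, using scikit-learn's standard `GaussianProcessRegressor`.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds gait features that an adaptive
# oscillator could extract (stride frequency in Hz, phase at peak hip flexion in rad).
X = rng.uniform([0.8, 0.0], [1.4, 2 * np.pi], size=(30, 2))

# Hypothetical user-preferred peak assistance torque (N*m), assumed to vary
# smoothly with the gait features, plus small observation noise.
y = 5.0 + 3.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0.0, 0.1, 30)

# Fit a GP with an RBF kernel plus a white-noise term for the noisy preferences.
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2),
    normalize_y=True,
).fit(X, y)

# Predict the preferred torque (with uncertainty) for a new gait condition.
mean, std = gpr.predict(np.array([[1.1, np.pi / 2]]), return_std=True)
print(f"predicted torque: {mean[0]:.2f} +/- {std[0]:.2f} N*m")
```

The predictive standard deviation is what makes GPR attractive here: regions of gait-feature space with few preference queries report high uncertainty, which a preference-learning loop can use to decide where to query next.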
Pages: 1000-1009
Page count: 10
Related Papers
50 records
  • [31] APReL: A Library for Active Preference-based Reward Learning Algorithms
    Biyik, Erdem
    Talati, Aditi
    Sadigh, Dorsa
    PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), 2022, : 613 - 617
  • [32] Preference-based decision making for personalised access to Learning Resources
    Department of Special Education, University of Thessaly, Argonafton and Filellinon Street, Volos, GR 38221, Greece
    Int. J. Auton. Adapt. Commun. Syst., 2008, 3: 356-369
  • [33] Contextual Bandits and Imitation Learning with Preference-Based Active Queries
    Sekhari, Ayush
    Sridharan, Karthik
    Sun, Wen
    Wu, Runzhe
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [34] Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation
    Ren, Zhizhou
    Liu, Anji
    Liang, Yitao
    Peng, Jian
    Ma, Jianzhu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [35] A Policy Iteration Algorithm for Learning from Preference-Based Feedback
    Wirth, Christian
    Furnkranz, Johannes
    ADVANCES IN INTELLIGENT DATA ANALYSIS XII, 2013, 8207 : 427 - 437
  • [36] Active Preference-Based Gaussian Process Regression for Reward Learning
    Biyik, Erdem
    Huynh, Nicolas
    Kochenderfer, Mykel J.
    Sadigh, Dorsa
    ROBOTICS: SCIENCE AND SYSTEMS XVI, 2020,
  • [37] Adaptive preferences, self-expression and preference-based freedom rankings
    Costella, Annalisa
    ECONOMICS & PHILOSOPHY, 2024, 40 (03): : 513 - 534
  • [38] Preference-based Reinforcement Learning with Finite-Time Guarantees
    Xu, Yichong
    Wang, Ruosong
    Yang, Lin F.
    Singh, Aarti
    Dubrawski, Artur
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [39] Energy-efficient Adaptive Communication by Preference-based Routing and Forecasting
    Reissner, Daniel
    Caspar, Mirko
    Hardt, Wolfram
    Strakosch, Florian
    Derbel, Faouzi
    2014 11TH INTERNATIONAL MULTI-CONFERENCE ON SYSTEMS, SIGNALS & DEVICES (SSD), 2014,
  • [40] Preference-based belief operators
    Asheim, GB
    Sovik, Y
    MATHEMATICAL SOCIAL SCIENCES, 2005, 50 (01) : 61 - 82