Study on force control for robot massage with a model-based reinforcement learning algorithm

Cited: 0
Authors
Meng Xiao
Tie Zhang
Yanbiao Zou
Xiaohu Yan
Wen Wu
Affiliations
[1] Southern Medical University, Department of Rehabilitation, Zhujiang Hospital
[2] South China University of Technology, School of Mechanical and Automotive Engineering
[3] Shenzhen Polytechnic, School of Artificial Intelligence
[4] Southern Medical University, Rehabilitation Medical School
Source
Keywords
Robot; Human–robot interaction; Force control; Reinforcement learning; Impedance control
DOI
Not available
CLC number
Subject classification
Abstract
When a robot end-effector contacts human skin, it is difficult to adjust the contact force autonomously in an unknown environment. Therefore, a robot force control algorithm based on reinforcement learning with a state transition model is proposed. The dynamic relationship between the robot end-effector and the skin is described by an impedance control model. Because the reference trajectory is difficult to obtain, a skin mechanics model is established to estimate the environmental boundary for impedance control. Because the impedance control parameters are difficult to tune, a reinforcement learning algorithm combining a neural network with the cross-entropy method is constructed to search for the control parameters. The state transition model, built with a BP neural network, can be updated offline, which accelerates the search for optimal control parameters and mitigates the slow convergence of reinforcement learning. The uncertainty of the contact process is handled through a probabilistic, statistics-based policy search. Experimental results show that, compared with a traditional PID algorithm, the model-based reinforcement learning force controller obtains a smoother contact force, with the force error remaining within ±0.2 N in the online experiments.
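The parameter-search idea described in the abstract — cross-entropy optimization of impedance gains evaluated against a contact model — can be sketched as follows. This is a minimal illustration only: the 1-D linear-spring skin model, the virtual mass, and all numerical values are assumptions for the sketch, not taken from the paper (which uses a learned BP-network state transition model rather than an analytic simulator).

```python
import numpy as np

def contact_force_error(params, f_d=5.0, k_env=500.0, dt=0.001, steps=2000):
    """Simulate 1-D impedance-controlled contact against a linear-spring
    skin model (f_ext = k_env * indentation) and return the mean absolute
    force-tracking error. `params` are the impedance damping and stiffness."""
    b, k = params
    m = 1.0                              # virtual impedance mass (assumed)
    x, v = 0.0, 0.0                      # end-effector position, velocity
    x_ref = f_d / k_env                  # indentation giving the desired force
    err = 0.0
    for _ in range(steps):
        f_ext = k_env * max(x, 0.0)      # skin reaction force (0 if no contact)
        # impedance law: m*a + b*v + k*(x - x_ref) = f_d - f_ext
        a = (f_d - f_ext - b * v - k * (x - x_ref)) / m
        v += a * dt                      # explicit Euler integration
        x += v * dt
        if not np.isfinite(x):           # unstable gains -> worst score
            return np.inf
        err += abs(f_d - f_ext)
    return err / steps

def cem_search(n_iter=15, pop=40, elite=8, seed=0):
    """Cross-entropy method over (b, k): sample candidates from a Gaussian,
    keep the elite fraction with the lowest force error, refit, repeat."""
    rng = np.random.default_rng(seed)
    mu = np.array([50.0, 500.0])
    sigma = np.array([20.0, 200.0])
    for _ in range(n_iter):
        samples = np.abs(rng.normal(mu, sigma, size=(pop, 2)))  # keep gains > 0
        scores = np.array([contact_force_error(s) for s in samples])
        elites = samples[np.argsort(scores)[:elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu, contact_force_error(mu)

if __name__ == "__main__":
    (b_opt, k_opt), err = cem_search()
    print(f"b={b_opt:.1f}, k={k_opt:.1f}, mean |force error|={err:.3f} N")
```

In the paper's setting, the inner simulator would be replaced by rollouts of the offline-updated BP-network state transition model, which is what makes the parameter search fast enough for online use.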
Pages: 509-519
Page count: 10
Related papers
50 records
  • [31] Research on robot constant force control of surface tracking based on reinforcement learning
    Zhang T.
    Xiao M.
    Zou Y.-B.
    Xiao J.-D.
Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2019, 53 (10): 1865-1873, 1882
  • [32] An Admittance Parameter Optimization Method Based on Reinforcement Learning for Robot Force Control
    Hu, Xiaoyi
    Liu, Gongping
    Ren, Peipei
    Jia, Bing
    Liang, Yiwen
    Li, Longxi
    Duan, Shilin
    ACTUATORS, 2024, 13 (09)
  • [33] Model-Based Iterative Learning Control for Industrial Robot Manipulators
    Yeon, Je Sung
    Park, Jong Hyeon
    Son, Seung-Woo
    Lee, Sang-Hun
2009 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS (ICAL 2009), VOLS 1-3, 2009: 24+
  • [34] Model-Based Robot Learning Control with Uncertainty Directed Exploration
    Cao, Junjie
    Liu, Yong
    Yang, Jian
    Pan, Zaisheng
2020 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2020: 2004-2010
  • [35] Model-based Reinforcement Learning: A Survey
    Moerland, Thomas M.
    Broekens, Joost
    Plaat, Aske
    Jonker, Catholijn M.
FOUNDATIONS AND TRENDS IN MACHINE LEARNING, 2023, 16 (01): 1-118
  • [36] A survey on model-based reinforcement learning
Luo, Fan-Ming
Xu, Tian
Lai, Hang
Chen, Xiong-Hui
Zhang, Weinan
Yu, Yang
Science China Information Sciences, 2024, 67 (02): 59-84
  • [37] Nonparametric model-based reinforcement learning
    Atkeson, CG
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 10, 1998, 10: 1008-1014
  • [38] The ubiquity of model-based reinforcement learning
    Doll, Bradley B.
    Simon, Dylan A.
    Daw, Nathaniel D.
CURRENT OPINION IN NEUROBIOLOGY, 2012, 22 (06): 1075-1081
  • [39] Multiple model-based reinforcement learning
    Doya, K
    Samejima, K
    Katagiri, K
    Kawato, M
NEURAL COMPUTATION, 2002, 14 (06): 1347-1369
  • [40] A survey on model-based reinforcement learning
    Luo, Fan-Ming
    Xu, Tian
    Lai, Hang
    Chen, Xiong-Hui
    Zhang, Weinan
    Yu, Yang
    SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (02)