Deep Reinforcement Learning for Robotic Control with Multi-Fidelity Models

Cited by: 0
Authors
Leguizamo, David Felipe [1 ]
Yang, Hsin-Jung [1 ]
Lee, Xian Yeow [1 ]
Sarkar, Soumik [1 ]
Affiliations
[1] Iowa State University, Ames, IA 50010, USA
Source
IFAC-PAPERSONLINE | 2022, Vol. 55, Issue 37
Keywords
Robotic Systems; Self-Learning Models; Real-Time Artificial Intelligence; Reinforcement Learning Control; Engineering Applications of Artificial Intelligence; Optimization and Control; Multi-Fidelity Modeling
DOI
10.1016/j.ifacol.2022.11.183
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Deep reinforcement learning (DRL) can be used to develop robotic controllers. A DRL agent can learn complicated kinematic relationships, resulting in a control policy that takes actions based on an observed state. However, a DRL agent typically requires extensive trial and error before it begins to take appropriate actions. It is therefore often useful to leverage simulated robotic manipulators before performing any training or testing on actual hardware. Several options exist for such simulation, ranging from simple kinematic models to more complex models that seek to accurately capture the effects of gravity, inertia, and friction. The latter can provide excellent representations of a robotic plant, but typically at a noticeably higher computational expense. Reducing the expense of simulating the robotic plant (while still maintaining a reasonable degree of accuracy) can accelerate an already expensive DRL training loop. In this work, we present a methodology for using a lower-fidelity model (based on Denavit-Hartenberg parameters) to initialize the training of a DRL agent for control of a Sawyer robotic arm. We show that the trained DRL policy can then be fine-tuned in a higher-fidelity simulation provided by the robot's manufacturer. We validate the accuracy of the fully trained policy by transferring it to the actual hardware, demonstrating the power of DRL to learn complicated robotic tasks entirely in simulation. Finally, we benchmark the time required to train a policy at each level of fidelity. Copyright (c) 2022 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
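The sketch below illustrates, in broad strokes, the two-stage idea described in the abstract: a cheap kinematics-only environment built from Denavit-Hartenberg (DH) parameters for initial DRL training, whose learned policy weights would then seed further training in a higher-fidelity simulator before hardware transfer. This is not the authors' code; the environment name, reward, DH values, and step sizes are illustrative assumptions, and the DH table is a generic 7-DOF placeholder rather than the actual Sawyer parameters.

# Illustrative sketch (not the authors' implementation): a low-fidelity
# kinematic environment built from Denavit-Hartenberg (DH) parameters.
# A DRL agent pre-trained here could have its weights reused to initialize
# training in a higher-fidelity dynamics simulator before hardware transfer.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link from its DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """End-effector position obtained by chaining the per-link DH transforms."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

class LowFidelityReachEnv:
    """Kinematics-only reaching task: reward is negative distance to a goal.

    The DH table is a placeholder for a generic 7-DOF arm, not the real
    Sawyer parameters; gravity, inertia, and friction are ignored, which is
    what makes this model cheap to simulate.
    """
    DH_TABLE = [(0.317, 0.081, -np.pi / 2),
                (0.193, 0.000,  np.pi / 2),
                (0.400, 0.000, -np.pi / 2),
                (0.169, 0.000,  np.pi / 2),
                (0.400, 0.000, -np.pi / 2),
                (0.137, 0.000,  np.pi / 2),
                (0.240, 0.000,  0.0)]

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.q = np.zeros(len(self.DH_TABLE))            # joint angles
        self.goal = self.rng.uniform(-0.5, 0.5, size=3)  # random workspace goal
        return np.concatenate([self.q, self.goal])

    def step(self, action):
        # Small joint-angle increments, clipped to a symmetric joint range.
        self.q = np.clip(self.q + 0.05 * np.asarray(action), -np.pi, np.pi)
        ee = forward_kinematics(self.q, self.DH_TABLE)
        dist = np.linalg.norm(ee - self.goal)
        return np.concatenate([self.q, self.goal]), -dist, dist < 0.02

if __name__ == "__main__":
    # Stage 1 (cheap): train a DRL policy against this kinematic model.
    # Stage 2 (expensive): load the learned weights and continue training in
    # the manufacturer's higher-fidelity simulator, then deploy on hardware.
    env = LowFidelityReachEnv()
    obs = env.reset()
    for _ in range(5):  # random-action rollout as a smoke test
        obs, reward, done = env.step(np.random.uniform(-1, 1, size=7))
        print(round(reward, 3))

In the paper's setting, the policy trained against this kind of cheap model is only an initialization; the accuracy-critical fine-tuning happens in the manufacturer-provided simulator, which models the dynamics the DH model leaves out.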
Pages: 193-198
Page count: 6
Related Papers (showing 10 of 50)
  • [1] Multi-fidelity reinforcement learning with control variates
    Khairy, Sami
    Balaprakash, Prasanna
    NEUROCOMPUTING, 2024, 597
  • [2] Reinforcement Learning with Multi-Fidelity Simulators
    Cutler, Mark
    Walsh, Thomas J.
    How, Jonathan P.
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2014, : 3888 - 3895
  • [3] Multi-fidelity reinforcement learning framework for shape optimization
    Bhola, Sahil
    Pawar, Suraj
    Balaprakash, Prasanna
    Maulik, Romit
    JOURNAL OF COMPUTATIONAL PHYSICS, 2023, 482
  • [4] Leveraging deep reinforcement learning for design space exploration with multi-fidelity surrogate model
    Li, Haokun
    Wang, Ru
    Wang, Zuoxu
    Li, Guannan
    Wang, Guoxin
    Yan, Yan
    JOURNAL OF ENGINEERING DESIGN, 2024,
  • [5] Multi-fidelity models for model predictive control
    Kameswaran, Shiva
    Subrahmanya, Niranjan
    11TH INTERNATIONAL SYMPOSIUM ON PROCESS SYSTEMS ENGINEERING, PTS A AND B, 2012, 31 : 1627 - 1631
  • [6] MULTI-FIDELITY GENERATIVE DEEP LEARNING TURBULENT FLOWS
    Geneva, Nicholas
    Zabaras, Nicholas
    FOUNDATIONS OF DATA SCIENCE, 2020, 2 (04): : 391 - 428
  • [7] Multi-fidelity prediction of molecular optical peaks with deep learning
    Greenman, Kevin P.
    Green, William H.
    Gomez-Bombarelli, Rafael
    CHEMICAL SCIENCE, 2022, 13 (04) : 1152 - 1162
  • [8] Conditional Deep Gaussian Processes: Multi-Fidelity Kernel Learning
    Lu, Chi-Ken
    Shafto, Patrick
    ENTROPY, 2021, 23 (11)
  • [9] Models and algorithms for multi-fidelity data
    Forbes, Alistair B.
    ADVANCED MATHEMATICAL AND COMPUTATIONAL TOOLS IN METROLOGY AND TESTING XI, 2019, 89 : 178 - 185
  • [10] Deep Multi-Fidelity Active Learning of High-Dimensional Outputs
    Li, Shibo
    Wang, Zheng
    Kirby, Robert M.
    Zhe, Shandian
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151