Comparison of multiple reinforcement learning and deep reinforcement learning methods for the task aimed at achieving the goal

Cited by: 1
Authors
Parak R. [1]
Matousek R. [1]
Institutions
[1] Institute of Automation and Computer Science, Brno University of Technology
Keywords
Bézier spline; Deep neural network; Motion planning; Reinforcement Learning; Robotics; UR3
DOI
10.13164/mendel.2021.1.001
Abstract
Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) methods are a promising approach to solving complex tasks in the real world with physical robots. In this paper, we compare several reinforcement learning (Q-Learning, SARSA) and deep reinforcement learning (Deep Q-Network, Deep SARSA) methods for a task aimed at achieving a goal using the UR3 robotic arm. The main optimization problem of this experiment is to find the best solution for each RL/DRL scenario, that is, to minimize the Euclidean distance accuracy error and to smooth the resulting path with the Bézier spline method. The simulation and the real-world application are controlled by the Robot Operating System (ROS). The learning environment is implemented using the OpenAI Gym library, which uses the RViz simulation tool and the Gazebo 3D modeling tool for dynamics and kinematics. © 2021, Brno University of Technology. All rights reserved.
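The abstract names the two tabular update rules being compared (Q-Learning and SARSA), a Euclidean-distance accuracy objective, and Bézier-spline path smoothing. The sketch below illustrates these pieces in a minimal, hedged form; the hyperparameters, the discretised state/action encoding, and the distance-based reward shaping are illustrative assumptions, not the authors' actual implementation (which runs on ROS, RViz, and Gazebo with a UR3 arm).

```python
# Minimal sketch (assumptions, not the authors' code) of the compared tabular
# updates and of the Bezier-spline smoothing mentioned in the abstract.
import numpy as np
from math import comb

def epsilon_greedy(Q, state, epsilon=0.1):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[state]))

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy TD update: bootstrap from the greedy action in the next state."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy TD update: bootstrap from the action actually taken next."""
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])

def distance_reward(tcp_position, goal_position):
    """Illustrative reward: negative Euclidean distance between the end-effector
    (tool centre point) and the goal, so maximising reward minimises the error."""
    return -float(np.linalg.norm(np.asarray(tcp_position) - np.asarray(goal_position)))

def bezier_curve(control_points, n_samples=50):
    """Evaluate a Bezier curve in Bernstein form through the given control points;
    a learned waypoint path can be passed in as control points to smooth it."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    return sum(comb(n, i) * (1.0 - t) ** (n - i) * t ** i * P[i] for i in range(n + 1))
```

Deep Q-Network and Deep SARSA replace the table Q with a neural-network approximator trained on the same temporal-difference targets; the paper compares all four variants on the reach-the-goal task.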
Pages: 1-8
Number of pages: 7