Q-learning based univector field navigation method for mobile robots

Cited: 2
Authors
Vien, Ngo Anh [1 ]
Viet, Nguyen Hoang [1 ]
Park, HyunJeong [1 ]
Lee, SeungGwan
Chung, TaeChoong [1 ]
Affiliations
[1] Kyung Hee Univ, Sch Elect & Informat, Dept Comp Engn, Artificial Intelligence Lab, Seoul, South Korea
Keywords
reinforcement learning; Q-learning; double action Q-learning; navigation; mobile robots; univector field
DOI
10.1007/978-1-4020-6264-3_80
CLC number
TP [Automation & Computer Technology]
Discipline code
0812
Abstract
In this paper, a Q-learning based univector field method is proposed for mobile robots to accomplish obstacle avoidance while attaining the desired orientation at the target position. The univector field method guarantees the desired posture of the robot at the target position, but it does not steer the robot around obstacles. To solve this problem, a modified univector field is used and trained by Q-learning: when the robot, following the field toward the desired posture, collides with an obstacle, the univector fields at the collision positions are modified according to the reinforcement signal of the Q-learning algorithm. With the proposed navigation method, the navigation task in a dynamically changing environment becomes easier, because double action Q-learning [8] is used instead of ordinary Q-learning to train the univector field. Computer simulations and experiments on an obstacle-avoiding mobile robot demonstrate the effectiveness of the proposed scheme.
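The training idea in the abstract (penalize the field direction at collision states, reinforce directions that reach the target) can be sketched with ordinary tabular Q-learning on a toy grid. Everything below, including the grid size, reward values, and the 8-heading action set, is an illustrative assumption rather than the paper's actual setup, which uses double action Q-learning [8] to shape a continuous univector field.

```python
import numpy as np

# Hypothetical sketch: learn a direction ("univector") field on a grid with
# tabular Q-learning. State = grid cell; action = one of 8 headings.
# A collision yields a negative reward, reaching the goal a positive one.
rng = np.random.default_rng(0)
H, W = 6, 6
goal = (5, 5)
obstacles = {(2, 2), (2, 3), (3, 2)}
# 8 headings: N, NE, E, SE, S, SW, W, NW (row delta, column delta)
moves = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

Q = np.zeros((H, W, len(moves)))
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(s, a):
    r, c = s[0] + moves[a][0], s[1] + moves[a][1]
    if not (0 <= r < H and 0 <= c < W) or (r, c) in obstacles:
        return s, -1.0, False   # collision: stay put, penalize the heading
    if (r, c) == goal:
        return (r, c), 1.0, True  # reached the target cell
    return (r, c), -0.01, False   # small step cost encourages short paths

for episode in range(2000):
    s = (0, 0)
    for _ in range(100):
        a = int(rng.integers(len(moves))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s][a])
        s = s2
        if done:
            break

# The learned field: the greedy best heading index at each cell.
field = np.argmax(Q, axis=2)
```

After training, following `field` greedily from the start cell avoids the obstacle cluster and reaches the goal; headings that caused collisions have been pushed down by the negative reward, which mirrors the paper's idea of modifying the univector field at collision positions.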
Pages: 463+
Page count: 2
Related papers (50 total)
  • [21] Application of Deep Q-Learning for Wheel Mobile Robot Navigation
    Mohanty, Prases K.
    Sah, Arun Kumar
    Kumar, Vikas
    Kundu, Shubhasri
    2017 3RD INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND NETWORKS (CINE), 2017, : 88 - 93
  • [22] Q-learning-based navigation for mobile robots in continuous and dynamic environments
    Maoudj, Abderraouf
    Christensen, Anders Lyhne
    2021 IEEE 17TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2021, : 1338 - 1345
  • [23] A navigation method for mobile robots using interval type-2 fuzzy neural network fitting Q-learning in unknown environments
    Yi, Zeren
    Li, Guojin
    Chen, Shuang
    Xie, Wei
    Xu, Bugong
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 37 (01) : 1113 - 1121
  • [24] Q-learning based method of adaptive path planning for mobile robot
    Li, Yibin
    Li, Caihong
    Zhang, Zijian
    2006 IEEE INTERNATIONAL CONFERENCE ON INFORMATION ACQUISITION, VOLS 1 AND 2, CONFERENCE PROCEEDINGS, 2006, : 983 - 987
  • [25] A new mobile robot navigation method using fuzzy logic and a modified Q-learning algorithm
    Boubertakh, H.
    Tadjine, M.
    Glorennec, P. -Y.
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2010, 21 (1-2) : 113 - 119
  • [26] Incremental Q-learning strategy for adaptive PID control of mobile robots
    Carlucho, Ignacio
    De Paula, Mariano
    Villar, Sebastian A.
    Acosta, Gerardo G.
    EXPERT SYSTEMS WITH APPLICATIONS, 2017, 80 : 183 - 199
  • [27] Model-based Q-Learning for Humanoid Robots
    Le, Than D.
    Le, An T.
    Nguyen, Duy T.
    2017 18TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2017, : 608 - 613
  • [28] Topological Q-learning with internally guided exploration for mobile robot navigation
    Hafez, Muhammad Burhan
    Loo, Chu Kiong
    NEURAL COMPUTING & APPLICATIONS, 2015, 26 (08): : 1939 - 1954
  • [29] An improved Q-learning algorithm for an autonomous mobile robot navigation problem
    Muhammad, Jawad
    Bucak, Ihsan Omur
    2013 INTERNATIONAL CONFERENCE ON TECHNOLOGICAL ADVANCES IN ELECTRICAL, ELECTRONICS AND COMPUTER ENGINEERING (TAEECE), 2013, : 239 - 243
  • [30] Reactive fuzzy controller design by Q-learning for mobile robot navigation
    Zhang, Wenzhi
    Lyu, Tiansheng
    Journal of Harbin Institute of Technology, 2005, (03) : 319 - 324