Q-learning based univector field navigation method for mobile robots

Cited by: 2
Authors:
Vien, Ngo Anh [1 ]
Viet, Nguyen Hoang [1 ]
Park, HyunJeong [1 ]
Lee, SeungGwan
Chung, TaeChoong [1 ]
Affiliations:
[1] Kyung Hee Univ, Sch Elect & Informat, Dept Comp Engn, Artificial Intelligence Lab, Seoul, South Korea
Keywords:
reinforcement learning; Q-learning; double action Q-learning; navigation; mobile robots; univector field
DOI:
10.1007/978-1-4020-6264-3_80
CLC classification: TP [automation technology; computer technology]
Discipline classification code: 0812
Abstract:
In this paper, a Q-learning based univector field method is proposed for a mobile robot to achieve obstacle avoidance together with the desired robot orientation at the target position. The univector field method guarantees the desired posture of the robot at the target, but it does not steer the robot away from obstacles. To solve this problem, a modified univector field is used and trained by Q-learning: whenever the robot, while following the field toward the desired posture, collides with an obstacle, the univector fields at the collision positions are adjusted according to the reinforcement signal of the Q-learning algorithm. With the proposed navigation method, the navigation task in a dynamically changing environment becomes easier, since double action Q-learning [8] rather than ordinary Q-learning is used to train the univector field. Computer simulations and experiments with an obstacle-avoiding mobile robot demonstrate the effectiveness of the proposed scheme.
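
As a rough illustration of the idea summarized above, the sketch below uses plain one-step Q-learning to learn heading corrections on top of a univector field that points toward the goal. This is not the authors' implementation: the grid size, obstacle layout, reward values, and discrete correction angles are all assumptions made for the example, and the paper's double action Q-learning [8] would replace the ordinary update shown here.

```python
import numpy as np

# Hypothetical sketch, not the authors' code: a univector field on a small grid
# points toward the goal, and Q-learning learns a heading correction for cells
# where blindly following the field leads to collisions.

GRID = 10                                     # assumed 10x10 workspace
GOAL = np.array([9.0, 9.0])                   # assumed goal position
OBSTACLE = np.array([5.0, 5.0])               # assumed single circular obstacle
ACTIONS = np.deg2rad([-60, -30, 0, 30, 60])   # assumed discrete heading corrections
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1             # assumed learning parameters

Q = np.zeros((GRID, GRID, len(ACTIONS)))      # Q(cell, correction)

def univector(pos):
    """Raw univector field: heading pointing straight at the goal."""
    d = GOAL - pos
    return np.arctan2(d[1], d[0])

def step(pos, heading):
    """Move one unit along the heading; return new position, reward, done flag."""
    new = np.clip(pos + np.array([np.cos(heading), np.sin(heading)]), 0, GRID - 1)
    collided = np.linalg.norm(new - OBSTACLE) < 1.0
    reached = np.linalg.norm(new - GOAL) < 1.0
    reward = -10.0 if collided else (10.0 if reached else -0.1)
    return new, reward, collided or reached

def train(episodes=500, max_steps=100):
    for _ in range(episodes):
        pos = np.array([0.0, 0.0])
        for _ in range(max_steps):
            s = tuple(pos.astype(int))
            a = (np.random.randint(len(ACTIONS)) if np.random.rand() < EPS
                 else int(np.argmax(Q[s])))
            heading = univector(pos) + ACTIONS[a]   # field direction plus learned correction
            new_pos, reward, done = step(pos, heading)
            s2 = tuple(new_pos.astype(int))
            # Ordinary one-step Q-learning update; the paper's double action
            # Q-learning [8] would be substituted here.
            Q[s][a] += ALPHA * (reward + GAMMA * np.max(Q[s2]) - Q[s][a])
            pos = new_pos
            if done:
                break

train()
```

At execution time the robot would follow `univector(pos) + ACTIONS[argmax Q[cell]]`, so cells never involved in collisions keep corrections near zero and the original field direction is preserved there, which mirrors the idea of modifying the univector field only at collision positions.
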
Pages: 463+
Page count: 2
Related papers (50 in total)
  • [11] Path planning of mobile robots with Q-learning
    Cetin, Halil
    Durdu, Akif
    2014 22ND SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2014, : 2162 - 2165
  • [12] Mobile robot navigation: neural Q-learning
    Parasuraman, S.
    Yun, Soh Chin
    INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2012, 44 (04) : 303 - 311
  • [13] Mobile Robot Navigation: Neural Q-Learning
    Yun, Soh Chin
    Parasuraman, S.
    Ganapathy, V.
    ADVANCES IN COMPUTING AND INFORMATION TECHNOLOGY, VOL 3, 2013, 178 : 259 - +
  • [14] An efficient initialization approach of Q-learning for mobile robots
    Yong Song
    Yi-bin Li
    Cai-hong Li
    Gui-fang Zhang
    International Journal of Control, Automation and Systems, 2012, 10 : 166 - 172
  • [15] An Efficient Initialization Approach of Q-learning for Mobile Robots
    Song, Yong
    Li, Yi-bin
    Li, Cai-hong
    Zhang, Gui-fang
    INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS, 2012, 10 (01) : 166 - 172
  • [16] Distributed lazy Q-learning for cooperative mobile robots
    Touzet, Claude F.
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2004, 1 (01) : 5 - 13
  • [17] Dynamic fuzzy Q-Learning and control of mobile robots
    Deng, C
    Er, MJ
    Xu, J
    2004 8TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION, VOLS 1-3, 2004, : 2336 - 2341
  • [18] Mobile robot navigation using neural Q-learning
    Yang, GS
    Chen, EK
    An, CW
    PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2004, : 48 - 52
  • [19] Optimal path planning approach based on Q-learning algorithm for mobile robots
    Maoudj, Abderraouf
    Hentout, Abdelfetah
    APPLIED SOFT COMPUTING, 2020, 97
  • [20] Learning Motion Policy for Mobile Robots using Deep Q-Learning
    Kwak, Nosan
    Yoon, Sukjune
    Roh, Kyungshik
    PROCEEDINGS 2017 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE (CSCI), 2017, : 805 - 810