Q-learning based univector field navigation method for mobile robots

Cited by: 2
Authors
Vien, Ngo Anh [1 ]
Viet, Nguyen Hoang [1 ]
Park, HyunJeong [1 ]
Lee, SeungGwan
Chung, TaeChoong [1 ]
Affiliations
[1] Kyung Hee Univ, Sch Elect & Informat, Dept Comp Engn, Artificial Intelligence Lab, Seoul, South Korea
Keywords
reinforcement learning; Q-learning; double action Q-learning; navigation; mobile robots; univector field
DOI
10.1007/978-1-4020-6264-3_80
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812 ;
Abstract
In this paper, a Q-learning based univector field method is proposed that lets a mobile robot avoid obstacles while reaching the desired orientation at the target position. The univector field method guarantees the desired posture of the robot at the target position, but it does not steer the robot around obstacles. To solve this problem, a modified univector field is trained by Q-learning: when the robot, following the field toward the desired posture, collides with an obstacle, the univector field at the collision positions is modified according to the reinforcement signal of the Q-learning algorithm. With the proposed navigation method, the navigation task in a dynamically changing environment becomes easier by training the univector field with double action Q-learning [8] instead of ordinary Q-learning. Computer simulations and experiments with an obstacle-avoiding mobile robot demonstrate the effectiveness of the proposed scheme.
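The collision-driven training described in the abstract can be sketched with ordinary tabular Q-learning (a minimal illustration only: the state encoding, action set, reward values, and hyperparameters below are assumptions, and the paper itself uses the double action Q-learning variant of [8], not this plain update):

```python
# Minimal tabular Q-learning sketch of the idea in the abstract: the agent
# learns a heading correction for each grid cell so that the univector field
# is bent away from obstacles after collisions. All parameters here are
# illustrative assumptions, not values from the paper.
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2    # assumed hyperparameters
ACTIONS = [-1, 0, 1]    # steer left / keep the univector heading / steer right

Q = {}    # sparse table: (state, action) -> value

def get_q(state, action):
    return Q.get((state, action), 0.0)

def choose_action(state):
    """Epsilon-greedy selection over the heading corrections."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: get_q(state, a))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(get_q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = get_q(state, action) + ALPHA * (
        reward + GAMMA * best_next - get_q(state, action))

# A collision at the next cell (reward -10) lowers the value of following
# the unmodified field at this cell, so another correction wins later.
update((2, 3), 0, -10.0, (2, 4))
print(get_q((2, 3), 0))    # -1.0
```

In the paper's setting the learned correction is applied on top of the analytic univector field rather than replacing it, so cells the robot never collides at keep the original field direction.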
Pages: 463 / +
Page count: 2
Related papers
50 items in total
  • [31] Reactive fuzzy controller design by Q-learning for mobile robot navigation
    Zhang, Wen-Zhi
    Lu, Tian-Sheng
    Journal of Harbin Institute of Technology (New Series), 2005, 12 (03) : 319 - 324
  • [32] Topological Q-learning with internally guided exploration for mobile robot navigation
    Muhammad Burhan Hafez
    Chu Kiong Loo
    Neural Computing and Applications, 2015, 26 : 1939 - 1954
  • [33] Based on A* and q-learning search and rescue robot navigation
    Pang, Tao
    Ruan, Xiaogang
    Wang, Ershen
    Fan, Ruiyuan
    Telkomnika - Indonesian Journal of Electrical Engineering, 2012, 10 (07): 1889 - 1896
  • [34] Study on motion forms of mobile robots generated by Q-Learning process based on reward databases
    Hara, Masayuki
    Inoue, Masashi
    Motoyama, Haruhisa
    Huang, Jian
    Yabuta, Tetsuro
    2006 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS, VOLS 1-6, PROCEEDINGS, 2006, : 5112 - +
  • [35] Mobile robot navigation based on improved CA-CMAC and Q-learning in dynamic environment
    Li Guo-jin
    Chen Shuang
    Xiao Zhu-li
    Dong Di-yong
    2015 34TH CHINESE CONTROL CONFERENCE (CCC), 2015, : 5020 - 5024
  • [36] Univector Field Method Based Multi-robot Navigation for Pursuit Problem
    Hoang Huu Viet
    An, Sang Hyeok
    Chung, TaeChoong
    ADVANCES IN COLLECTIVE INTELLIGENCE 2011, 2012, 113 : 131 - 143
  • [37] A Dynamic Building Method of Mobile Agent Migrating Path Based on Q-Learning
    Cheng, Yuan-cai
    Wang, Xiao-lin
    2014 IEEE 7TH JOINT INTERNATIONAL INFORMATION TECHNOLOGY AND ARTIFICIAL INTELLIGENCE CONFERENCE (ITAIC), 2014, : 270 - 275
  • [38] Reinforcement Learning based Method for Autonomous Navigation of Mobile Robots in Unknown Environments
    Roan Van Hoa
    Tran Duc Chuyen
    Nguyen Tung Lam
    Tran Ngoc Son
    Nguyen Duc Dien
    Vu Thi To Linh
    2020 INTERNATIONAL CONFERENCE ON ADVANCED MECHATRONIC SYSTEMS (ICAMECHS), 2020, : 266 - 269
  • [39] Q-Learning for autonomous vehicle navigation
    Gonzalez-Miranda, Oscar
    Miranda, Luis Antonio Lopez
    Ibarra-Zannatha, Juan Manuel
    2023 XXV ROBOTICS MEXICAN CONGRESS, COMROB, 2023, : 138 - 142
  • [40] A path planning approach for mobile robots using short and safe Q-learning
    Du, He
    Hao, Bing
    Zhao, Jianshuo
    Zhang, Jiamin
    Wang, Qi
    Yuan, Qi
    PLOS ONE, 2022, 17 (09):