Q-learning based univector field navigation method for mobile robots

Cited: 2
Authors
Vien, Ngo Anh [1 ]
Viet, Nguyen Hoang [1 ]
Park, HyunJeong [1 ]
Lee, SeungGwan
Chung, TaeChoong [1 ]
Affiliations
[1] Kyung Hee Univ, Sch Elect & Informat, Dept Comp Engn, Artificial Intelligence Lab, Seoul, South Korea
Keywords
reinforcement learning; Q-learning; double; action Q-learning; navigation; mobile robots; univector field;
DOI
10.1007/978-1-4020-6264-3_80
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline classification code
0812 ;
Abstract
In this paper, a Q-learning based univector field method is proposed for mobile robots to accomplish obstacle avoidance while reaching the desired orientation at the target position. The univector field method guarantees the desired posture of the robot at the target position, but it does not steer the robot around obstacles. To solve this problem, a modified univector field is used and trained by Q-learning: when the robot, following the field toward the desired posture, collides with an obstacle, the univector field at the collision position is modified according to the reinforcement signal of the Q-learning algorithm. With the proposed navigation method, the navigation task in a dynamically changing environment becomes easier, because double action Q-learning [8] is used to train the univector field instead of ordinary Q-learning. Computer simulations and experiments on an obstacle-avoiding mobile robot demonstrate the effectiveness of the proposed scheme.
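The training idea in the abstract — penalize field-following moves that cause collisions, and learn a heading adjustment per position so the modified field bends around obstacles — can be sketched with ordinary tabular Q-learning (not the paper's double action variant; the grid, obstacle, rewards, and learning constants below are illustrative assumptions, not the authors' setup):

```python
# Sketch only: tabular Q-learning that learns a heading adjustment to a
# univector field at each cell, so the field bends around an obstacle.
import math
import random

random.seed(0)

GRID = 5                                     # 5x5 grid (assumed)
GOAL, OBSTACLE = (4, 4), (2, 2)              # assumed positions
ACTIONS = [-math.pi / 4, 0.0, math.pi / 4]   # heading adjustments (rad)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2            # assumed constants

def base_field(cell):
    """Plain univector field: heading that points straight at the goal."""
    return math.atan2(GOAL[1] - cell[1], GOAL[0] - cell[0])

def step(cell, heading):
    """Move one cell in the grid direction closest to `heading`."""
    nxt = (min(max(cell[0] + round(math.cos(heading)), 0), GRID - 1),
           min(max(cell[1] + round(math.sin(heading)), 0), GRID - 1))
    if nxt == OBSTACLE:            # collision: stay put, large penalty
        return cell, -10.0, False
    if nxt == GOAL:
        return nxt, 10.0, True
    return nxt, -1.0, False        # small step cost

Q = {}  # (cell, action index) -> learned value

def train(episodes=300):
    for _ in range(episodes):
        cell, done, steps = (0, 0), False, 0
        while not done and steps < 50:
            if random.random() < EPS:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)),
                        key=lambda i: Q.get((cell, i), 0.0))
            nxt, r, done = step(cell, base_field(cell) + ACTIONS[a])
            best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q = Q.get((cell, a), 0.0)
            Q[(cell, a)] = q + ALPHA * (r + GAMMA * best_next - q)
            cell, steps = nxt, steps + 1

def modified_field(cell):
    """Field after learning: base heading plus the best learned adjustment."""
    a = max(range(len(ACTIONS)), key=lambda i: Q.get((cell, i), 0.0))
    return base_field(cell) + ACTIONS[a]
```

After training, `modified_field` returns the original goal-directed heading everywhere except near the obstacle, where the learned adjustment deflects the robot; the paper's double action Q-learning refines this update rule but the field-modification loop is the same in spirit.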
Pages: 463 / +
Page count: 2