ARE-QL: an enhanced Q-learning algorithm with optimized search for mobile robot path planning

Times Cited: 0
Authors
Zhang, Yunjie [1]
Liu, Yue [1]
Chen, Yadong [1]
Yang, Zhenjian [1]
Affiliations
[1] Tianjin Chengjian Univ, Sch Comp & Informat Engn, Tianjin, Peoples R China
Keywords
path planning; Q-learning; mobile robot; reinforcement learning; ant colony algorithm
DOI
10.1088/1402-4896/adb79a
Chinese Library Classification (CLC)
O4 [Physics]
Subject Classification Code
0702
Abstract
This paper addresses key challenges in Q-learning for mobile robot path planning, namely low learning efficiency and slow convergence, and proposes an ARE-QL algorithm with an optimized search range. First, the reward function of Q-learning is enhanced: a dynamic continuous reward mechanism based on heuristic environmental information is introduced to reduce the robot's search space and improve learning efficiency. Second, the pheromone mechanism of the ant colony algorithm is integrated, introducing a pheromone-guided matrix and path filtering that optimize the search range and accelerate convergence to the optimal path. Additionally, an adaptive exploration strategy based on state familiarity improves the algorithm's efficiency and robustness. Simulation results demonstrate that ARE-QL outperforms standard Q-learning and other improved algorithms, achieving faster convergence and higher path quality across environments of varying complexity. ARE-QL improves path-planning efficiency while demonstrating strong adaptability and robustness, offering new insights and solutions for mobile robot path planning research.
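The abstract does not give the paper's exact update rules, so the Python sketch below only illustrates, on a toy grid world, the three ideas it describes: a continuous distance-based reward, a pheromone matrix that biases action selection and is reinforced along successful paths, and an exploration rate that decays with state familiarity. The map, constants, and update forms are illustrative assumptions, not the authors' ARE-QL formulation.

import numpy as np

# Toy grid world: 0 = free cell, 1 = obstacle (assumed map, not from the paper).
GRID = np.zeros((10, 10), dtype=int)
GRID[4, 2:8] = 1                              # a simple wall the agent must skirt
START, GOAL = (0, 0), (9, 9)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = np.zeros((10, 10, len(ACTIONS)))          # Q-table indexed by (row, col, action)
pheromone = np.ones((10, 10))                 # assumed pheromone-guided matrix
visits = np.zeros((10, 10))                   # state-familiarity counter
alpha, gamma = 0.1, 0.95                      # assumed learning rate and discount

def manhattan(s):
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def step(state, a):
    """Apply an action; blocked moves keep the agent in place with a penalty."""
    nxt = (state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1])
    if not (0 <= nxt[0] < 10 and 0 <= nxt[1] < 10) or GRID[nxt] == 1:
        return state, -10.0
    if nxt == GOAL:
        return nxt, 100.0
    # Continuous heuristic reward: progress toward the goal minus a small step cost.
    return nxt, (manhattan(state) - manhattan(nxt)) - 0.1

def choose_action(state):
    """Exploration decays with familiarity; greedy choice is biased by pheromone."""
    eps = 0.9 / (1.0 + visits[state])         # assumed adaptive exploration schedule
    if np.random.rand() < eps:
        return np.random.randint(len(ACTIONS))
    scores = Q[state].copy()
    for a, (dr, dc) in enumerate(ACTIONS):
        r, c = state[0] + dr, state[1] + dc
        if 0 <= r < 10 and 0 <= c < 10:
            scores[a] += 0.1 * pheromone[r, c]  # pheromone-guided bias (assumed weight)
    return int(np.argmax(scores))

for episode in range(500):
    state, path = START, [START]
    for _ in range(200):
        visits[state] += 1
        a = choose_action(state)
        nxt, reward = step(state, a)
        # Standard one-step Q-learning update.
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state = nxt
        path.append(state)
        if state == GOAL:
            # Ant-colony-style reinforcement: shorter successful paths deposit more.
            for cell in path:
                pheromone[cell] += 10.0 / len(path)
            break
    pheromone *= 0.99                         # global evaporation each episode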
Pages: 17
Related Papers
50 records in total
  • [1] An optimized Q-Learning algorithm for mobile robot local path planning
    Zhou, Qian
    Lian, Yang
    Wu, Jiayang
    Zhu, Mengyue
    Wang, Haiyong
    Cao, Jinli
    KNOWLEDGE-BASED SYSTEMS, 2024, 286
  • [2] Mobile robot path planning based on Q-learning algorithm
    Li, Shaochuan
    Wang, Xuiqing
    Hu, Liwei
    Liu, Ying
    2019 WORLD ROBOT CONFERENCE SYMPOSIUM ON ADVANCED ROBOTICS AND AUTOMATION (WRC SARA 2019), 2019: 160-165
  • [3] Dynamic Path Planning of a Mobile Robot with Improved Q-Learning algorithm
    Li, Siding
    Xu, Xin
    Zuo, Lei
    2015 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, 2015: 409-414
  • [4] Extended Q-Learning Algorithm for Path-Planning of a Mobile Robot
    Goswami, Indrani
    Das, Pradipta Kumar
    Konar, Amit
    Janarthanan, R.
    SIMULATED EVOLUTION AND LEARNING, 2010, 6457: 379+
  • [5] PATH PLANNING OF MOBILE ROBOT BASED ON THE IMPROVED Q-LEARNING ALGORITHM
    Chen, Chaorui
    Wang, Dongshu
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2022, 18 (03): 687-702
  • [6] CLSQL: Improved Q-Learning Algorithm Based on Continuous Local Search Policy for Mobile Robot Path Planning
    Ma, Tian
    Lyu, Jiahao
    Yang, Jiayi
    Xi, Runtao
    Li, Yuancheng
    An, Jinpeng
    Li, Chao
    SENSORS, 2022, 22 (15)
  • [7] A Deterministic Improved Q-Learning for Path Planning of a Mobile Robot
    Konar, Amit
    Chakraborty, Indrani Goswami
    Singh, Sapam Jitu
    Jain, Lakhmi C.
    Nagar, Atulya K.
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2013, 43 (05): 1141-1153
  • [8] Path planning for mobile robot based on improved ant colony Q-learning algorithm
    Cui, Mengru
    He, Maowei
    Chen, Hanning
    Liu, Kunpeng
    Hu, Yabao
    Zheng, Chen
    Wang, Xuliang
    INTERNATIONAL JOURNAL OF INTERACTIVE DESIGN AND MANUFACTURING - IJIDEM, 2025, 19 (04): 3069-3087
  • [9] A Modified Q-learning Multi Robot Path Planning Algorithm
    Li, Bo
    Liang, Hongbin
    BASIC & CLINICAL PHARMACOLOGY & TOXICOLOGY, 2020, 127: 125-126