ARE-QL: an enhanced Q-learning algorithm with optimized search for mobile robot path planning

Times Cited: 0
Authors
Zhang, Yunjie [1 ]
Liu, Yue [1 ]
Chen, Yadong [1 ]
Yang, Zhenjian [1 ]
Affiliations
[1] Tianjin Chengjian Univ, Sch Comp & Informat Engn, Tianjin, Peoples R China
Keywords
path planning; Q-learning; mobile robot; reinforcement learning; ant colony algorithm;
DOI
10.1088/1402-4896/adb79a
Chinese Library Classification
O4 [Physics];
Discipline Code
0702
Abstract
This paper addresses two challenges in Q-learning for mobile robot path planning: low learning efficiency and slow convergence. An ARE-QL algorithm with an optimized search range is proposed to address these issues. First, the reward function of Q-learning is enhanced: a dynamic continuous reward mechanism based on heuristic environmental information is introduced to reduce the robot's search space and improve learning efficiency. Second, the pheromone mechanism of the ant colony algorithm is integrated, introducing a pheromone-guided matrix and path filtering that optimize the search range and accelerate convergence to the optimal path. Additionally, an adaptive exploration strategy based on state familiarity improves the algorithm's efficiency and robustness. Simulation results demonstrate that ARE-QL outperforms standard Q-learning and other improved algorithms, achieving faster convergence and higher path quality across environments of varying complexity. ARE-QL improves path planning efficiency while demonstrating strong adaptability and robustness, providing new insights and solutions for mobile robot path planning research.
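The abstract describes the three mechanisms only at a high level. The following is a minimal, illustrative Python sketch of how a distance-based continuous reward, a pheromone-guided action bias, and a familiarity-based exploration schedule could be combined in a grid-world Q-learning loop. The grid size, reward values, pheromone weights, and decay schedules below are assumptions made for illustration; they are not the authors' actual ARE-QL formulation, and obstacle handling is omitted.

# Illustrative sketch only: grid-world Q-learning with
# (1) a heuristic distance-shaped reward,
# (2) a pheromone matrix biasing action selection,
# (3) exploration that decays with state visit counts.
# All constants and update rules are assumptions, not the paper's method.

import numpy as np

GRID = 10                                        # assumed 10x10 grid environment
GOAL = (9, 9)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # up, down, left, right

Q = np.zeros((GRID, GRID, len(ACTIONS)))
pheromone = np.ones((GRID, GRID))                # assumed pheromone-guided matrix
visits = np.zeros((GRID, GRID))                  # state familiarity counter

ALPHA, GAMMA = 0.1, 0.9
RHO, DEPOSIT = 0.1, 1.0                          # assumed evaporation / deposit rates

def clamp(x, y):
    return min(max(x, 0), GRID - 1), min(max(y, 0), GRID - 1)

def shaped_reward(state, nxt):
    # Continuous reward from the heuristic (Manhattan) distance to the goal.
    if nxt == GOAL:
        return 100.0
    d_old = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
    d_new = abs(nxt[0] - GOAL[0]) + abs(nxt[1] - GOAL[1])
    return 1.0 if d_new < d_old else -1.0

def choose_action(state):
    # Epsilon shrinks as the state becomes familiar (assumed schedule);
    # greedy scores are biased by pheromone in the neighbouring cells.
    eps = 1.0 / (1.0 + visits[state])
    if np.random.rand() < eps:
        return np.random.randint(len(ACTIONS))
    scores = []
    for a, (dx, dy) in enumerate(ACTIONS):
        nx, ny = clamp(state[0] + dx, state[1] + dy)
        scores.append(Q[state][a] + 0.1 * pheromone[nx, ny])
    return int(np.argmax(scores))

for episode in range(500):
    s, path = (0, 0), [(0, 0)]
    while s != GOAL and len(path) < 4 * GRID * GRID:
        visits[s] += 1
        a = choose_action(s)
        nxt = clamp(s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
        r = shaped_reward(s, nxt)
        Q[s][a] += ALPHA * (r + GAMMA * np.max(Q[nxt]) - Q[s][a])
        s = nxt
        path.append(s)
    if s == GOAL:                                # reinforce successful paths
        pheromone *= (1.0 - RHO)                 # evaporation
        for cell in path:
            pheromone[cell] += DEPOSIT / len(path)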
Pages: 17