ARE-QL: an enhanced Q-learning algorithm with optimized search for mobile robot path planning

Times Cited: 0
Authors
Zhang, Yunjie [1 ]
Liu, Yue [1 ]
Chen, Yadong [1 ]
Yang, Zhenjian [1 ]
Affiliations
[1] Tianjin Chengjian Univ, Sch Comp & Informat Engn, Tianjin, Peoples R China
Keywords
path planning; Q-learning; mobile robot; reinforcement learning; ant colony algorithm
DOI
10.1088/1402-4896/adb79a
Chinese Library Classification
O4 [Physics]
Discipline Code
0702
Abstract
This paper addresses two challenges in Q-learning for mobile robot path planning: low learning efficiency and slow convergence. An ARE-QL algorithm with an optimized search range is proposed to address these issues. First, the reward function of Q-learning is enhanced: a dynamic continuous reward mechanism based on heuristic environmental information is introduced to reduce the robot's search space and improve learning efficiency. Second, the pheromone mechanism of the ant colony algorithm is integrated, introducing a pheromone-guided matrix and path filtering that optimize the search range and accelerate convergence to the optimal path. Additionally, an adaptive exploration strategy based on state familiarity enhances the algorithm's efficiency and robustness. Simulation results demonstrate that ARE-QL outperforms standard Q-learning and other improved algorithms, achieving faster convergence and higher path quality across environments of varying complexity. ARE-QL improves path-planning efficiency while demonstrating strong adaptability and robustness, offering new insights and solutions for mobile robot path planning research.
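Two of the abstract's ingredients — a dense heuristic reward and familiarity-based adaptive exploration — can be illustrated with a generic tabular Q-learning sketch on a grid map. Note this is a minimal illustration, not the authors' implementation: the shaping reward, the visit-count exploration schedule, and the grid setup below are assumptions, and the pheromone-guided matrix and path filtering are omitted entirely.

```python
import random

def manhattan(a, b):
    # Heuristic distance to the goal on a 4-connected grid.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def q_learning_path(grid, start, goal, episodes=500, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning on a grid ('#' = obstacle); returns the greedy path."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    Q = {}       # (state, action) -> value
    visits = {}  # state -> visit count ("state familiarity")

    def free(s):
        r, c = s
        return 0 <= r < rows and 0 <= c < cols and grid[r][c] != '#'

    for _ in range(episodes):
        s = start
        for _ in range(4 * rows * cols):
            visits[s] = visits.get(s, 0) + 1
            # Adaptive exploration: explore less in familiar states
            # (an illustrative schedule, not the paper's exact rule).
            if rng.random() < 1.0 / (1 + visits[s]):
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
            ns = (s[0] + moves[a][0], s[1] + moves[a][1])
            if not free(ns):
                r, ns = -1.0, s   # penalize hitting walls or obstacles
            elif ns == goal:
                r = 10.0          # terminal reward at the goal
            else:
                # Dense heuristic shaping: positive when the step reduces
                # distance to the goal (a stand-in for the paper's
                # dynamic continuous reward mechanism).
                r = 0.1 * (manhattan(s, goal) - manhattan(ns, goal))
            best_next = max(Q.get((ns, i), 0.0) for i in range(4))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
            s = ns
            if s == goal:
                break

    # Roll out the learned greedy policy from the start state.
    path, s = [start], start
    for _ in range(rows * cols):
        if s == goal:
            break
        a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
        ns = (s[0] + moves[a][0], s[1] + moves[a][1])
        if not free(ns):
            break
        path.append(ns)
        s = ns
    return path
```

On a small map such as `["....", ".#..", ".#..", "...."]`, calling `q_learning_path(grid, (0, 0), (3, 3))` returns an obstacle-free route from start to goal; the dense shaping term is what lets the agent receive feedback on every step rather than only at the terminal state, which is the efficiency argument the abstract makes.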
Pages: 17