An Improved Dueling Deep Double-Q Network Based on Prioritized Experience Replay for Path Planning of Unmanned Surface Vehicles

Citations: 14
Authors
Zhu, Zhengwei [1 ]
Hu, Can [1 ]
Zhu, Chenyang [2 ]
Zhu, Yanping [1 ]
Sheng, Yu [1 ]
Affiliations
[1] Changzhou Univ, Sch Microelect & Control Engn, Changzhou 213164, Jiangsu, Peoples R China
[2] Changzhou Univ, Sch Comp Sci & Artificial Intelligence, Changzhou 213164, Jiangsu, Peoples R China
Keywords
deep reinforcement learning; unmanned surface vehicle; path planning; algorithm optimization; fusion and integration
DOI
10.3390/jmse9111267
CLC Classification
U6 [Water Transportation]; P75 [Ocean Engineering]
Subject Classification Codes
0814; 081505; 0824; 082401
Abstract
Unmanned Surface Vehicles (USVs) have broad application prospects, and autonomous path planning, as one of their crucial technologies, has become an active research direction in the USV field. This paper proposes an Improved Dueling Deep Double-Q Network based on Prioritized Experience Replay (IPD3QN) to address the slow and unstable convergence of the traditional Deep Q-Network (DQN) algorithm in autonomous USV path planning. First, a double deep Q-network decouples the selection of the target action from the evaluation of its Q-value, mitigating overestimation. Prioritized experience replay is adopted to draw samples from the replay buffer, which raises the utilization of collected samples and accelerates neural-network training. The network is then refined by introducing a dueling architecture. Finally, soft target updates improve the stability of the algorithm, and a dynamic ε-greedy schedule is used to search for the optimal policy. Experiments are first conducted on the OpenAI Gym platform to pre-validate the algorithm on two classical control problems, CartPole and MountainCar, with a detailed analysis of how the hyperparameters affect model performance. The algorithm is then validated in a Maze environment. Comparative simulation experiments show that IPD3QN significantly improves learning performance in terms of convergence speed and convergence stability compared with DQN, D3QN, PD2QN, PDQN, and PD3QN. Moreover, with the IPD3QN algorithm the USV can plan an optimal path that matches the actual navigation environment.
Pages: 15
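To make the pipeline the abstract describes concrete, the sketch below shows, in PyTorch, the pieces IPD3QN combines: a dueling head, the double-Q target (online net selects the action, target net evaluates it), proportional prioritized replay with importance-sampling weights, soft target updates, and a decaying ε schedule. This is a minimal illustration under assumed settings (hidden size 128, buffer capacity 10000, α=0.6, β=0.4, τ=0.005, batch size 64, exponential ε decay), not the authors' exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s,a)

    def forward(self, s):
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

class PrioritizedReplay:
    """Proportional prioritized replay (list-based; a SumTree is faster)."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.buffer, self.priorities = [], []
        self.capacity, self.alpha = capacity, alpha

    def push(self, transition):  # transition = (s, a, r, s', done)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0); self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(max(self.priorities, default=1.0))  # new samples get max priority

    def sample(self, batch_size, beta=0.4):
        p = np.array(self.priorities) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        weights = (len(self.buffer) * p[idx]) ** (-beta)  # importance-sampling weights
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, torch.tensor(weights, dtype=torch.float32)

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(float(e)) + 1e-6  # avoid zero priority

def epsilon(step, eps_start=1.0, eps_end=0.05, decay=5000.0):
    """Dynamic epsilon-greedy: high exploration early, more exploitation later."""
    return eps_end + (eps_start - eps_end) * np.exp(-step / decay)

def train_step(online, target_net, replay, optimizer, gamma=0.99, tau=0.005):
    batch, idx, w = replay.sample(64)
    s, a, r, s2, d = zip(*batch)
    s  = torch.tensor(np.array(s),  dtype=torch.float32)
    s2 = torch.tensor(np.array(s2), dtype=torch.float32)
    a  = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r  = torch.tensor(r, dtype=torch.float32)
    d  = torch.tensor(d, dtype=torch.float32)
    with torch.no_grad():
        # Double-Q decoupling: the online net selects the greedy action,
        # the target net evaluates its Q-value.
        best_a   = online(s2).argmax(dim=1, keepdim=True)
        target_q = r + gamma * (1 - d) * target_net(s2).gather(1, best_a).squeeze(1)
    q    = online(s).gather(1, a).squeeze(1)
    td   = target_q - q
    loss = (w * td.pow(2)).mean()  # importance-sampling-weighted MSE
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    replay.update_priorities(idx, td.detach().numpy())
    # Soft update: the target net slowly tracks the online net instead of hard copying.
    for tp, op in zip(target_net.parameters(), online.parameters()):
        tp.data.mul_(1 - tau).add_(tau * op.data)
```

During a rollout, the agent would act randomly with probability epsilon(step) and otherwise take the argmax of the online network's output; the schedule parameters and τ above are placeholders, not values reported in the paper.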
Related Papers
50 records in total
  • [11] DeepSensing: A Novel Mobile Crowdsensing Framework With Double Deep Q-Network and Prioritized Experience Replay
    Tao, Xi
    Hafid, Abdelhakim Senhaji
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (12) : 11547 - 11558
  • [12] Prioritized Experience Replay-Based Path Planning Algorithm for Multiple UAVs
    Ren, Chongde
    Chen, Jinchao
    Du, Chenglie
    INTERNATIONAL JOURNAL OF AEROSPACE ENGINEERING, 2024, 2024
  • [13] Path Planning Method of Unmanned Surface Vehicles Formation Based on Improved A* Algorithm
    Sang, Tongtong
    Xiao, Jinchao
    Xiong, Junfeng
    Xia, Haoyun
    Wang, Zhongze
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (01)
  • [14] Noisy Dueling Double Deep Q-Network algorithm for autonomous underwater vehicle path planning
    Liao, Xu
    Li, Le
    Huang, Chuangxia
    Zhao, Xian
    Tan, Shumin
    FRONTIERS IN NEUROROBOTICS, 2024, 18
  • [15] Path planning for unmanned vehicle reconnaissance based on deep Q-network
    Xia, Yuqi
    Huang, Yanyan
    Chen, Qia
XI TONG GONG CHENG YU DIAN ZI JI SHU/SYSTEMS ENGINEERING AND ELECTRONICS, 2024, 46 (09) : 3070 - 3081
  • [16] Path planning of mobile robot based on improved double deep Q-network algorithm
    Wang, Zhenggang
    Song, Shuhong
    Cheng, Shenghui
    FRONTIERS IN NEUROROBOTICS, 2025, 19
  • [17] Three-Dimensional Path Planning for Unmanned Helicopter Using Memory-Enhanced Dueling Deep Q Network
    Yao, Jiangyi
    Li, Xiongwei
    Zhang, Yang
    Ji, Jingyu
    Wang, Yanchao
    Zhang, Danyang
    Liu, Yicen
    AEROSPACE, 2022, 9 (08)
  • [18] Double Deep Q-Learning With Prioritized Experience Replay for Anomaly Detection in Smart Environments
Fährmann, Daniel
    Jorek, Nils
    Damer, Naser
    Kirchbuchner, Florian
    Kuijper, Arjan
    IEEE ACCESS, 2022, 10 : 60836 - 60848
  • [19] Double Broad Reinforcement Learning Based on Hindsight Experience Replay for Collision Avoidance of Unmanned Surface Vehicles
    Yu, Jiabao
    Chen, Jiawei
    Chen, Ying
    Zhou, Zhiguo
    Duan, Junwei
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2022, 10 (12)
  • [20] Path planning for unmanned surface vehicle based on improved Q-Learning algorithm
    Wang, Yuanhui
    Lu, Changzhou
    Wu, Peng
    Zhang, Xiaoyue
    OCEAN ENGINEERING, 2024, 292