An Improved Dueling Deep Double-Q Network Based on Prioritized Experience Replay for Path Planning of Unmanned Surface Vehicles

Cited by: 14
Authors
Zhu, Zhengwei [1 ]
Hu, Can [1 ]
Zhu, Chenyang [2 ]
Zhu, Yanping [1 ]
Sheng, Yu [1 ]
Affiliations
[1] Changzhou Univ, Sch Microelect & Control Engn, Changzhou 213164, Jiangsu, Peoples R China
[2] Changzhou Univ, Sch Comp Sci & Artificial Intelligence, Changzhou 213164, Jiangsu, Peoples R China
Keywords
deep reinforcement learning; unmanned surface vehicle; path planning; algorithm optimization; fusion and integration
DOI
10.3390/jmse9111267
CLC Classification
U6 [Water Transportation]; P75 [Ocean Engineering]
Discipline Classification Codes
0814; 081505; 0824; 082401
Abstract
Unmanned Surface Vehicles (USVs) have broad application prospects, and autonomous path planning, as one of their crucial technologies, has become an active research direction in the USV field. This paper proposes an Improved Dueling Deep Double-Q Network Based on Prioritized Experience Replay (IPD3QN) to address the slow and unstable convergence of the traditional Deep Q-Network (DQN) algorithm in autonomous path planning of USVs. First, the deep double Q-network decouples the selection and evaluation of the action in the target Q value, which eliminates overestimation. Prioritized experience replay is adopted to draw samples from the experience replay unit, increasing the utilization of informative samples and accelerating the training of the neural network. Then, the network is optimized by introducing a dueling architecture. Finally, a soft update method improves the stability of the algorithm, and a dynamic epsilon-greedy method is used to find the optimal strategy. The algorithm is first pre-validated on two classical control problems from the OpenAI Gym test platform, CartPole and MountainCar, and the impact of the hyperparameters on model performance is analyzed in detail. The algorithm is then validated in a maze environment. Comparative simulation experiments show that IPD3QN significantly improves learning performance in terms of convergence speed and convergence stability compared with DQN, D3QN, PD2QN, PDQN, and PD3QN. Moreover, with the IPD3QN algorithm the USV can plan the optimal path according to the actual navigation environment.
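
For readers who want the mechanics described in the abstract in concrete form, the following is a minimal PyTorch sketch of four of the components it names: the dueling head, the double-Q target decoupling, the soft target update, and dynamic epsilon-greedy exploration. This is not the paper's implementation; the layer sizes, gamma, tau, and the decay constant are illustrative assumptions.

# Minimal sketch (illustrative assumptions, not the paper's code) of the
# dueling head, double-Q target, soft update, and epsilon schedule.
import math
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    # Dueling structure: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, s):
        h = self.feature(s)
        a = self.advantage(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

def double_q_target(online, target, r, s_next, done, gamma=0.99):
    # Double-Q decoupling: the online net SELECTS the greedy action,
    # the target net EVALUATES it, removing max-operator overestimation.
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_star).squeeze(1)
    return r + gamma * (1.0 - done) * q_next

def soft_update(online, target, tau=0.005):
    # Soft (Polyak) update: target <- tau * online + (1 - tau) * target.
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.data.mul_(1.0 - tau).add_(tau * p_o.data)

def dynamic_epsilon(step, eps_start=1.0, eps_end=0.05, decay=5000.0):
    # Exploration rate decays from eps_start toward eps_end over training.
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)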
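
The prioritized experience replay unit the abstract describes can likewise be sketched as a proportional-priority buffer with importance-sampling correction. The alpha and beta values and the flat-array storage are assumptions for clarity; a production buffer would typically use a sum-tree for O(log n) sampling.

# Proportional prioritized replay, a sketch under assumed hyperparameters.
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data = []
        self.prios = np.zeros(capacity, dtype=np.float32)
        self.pos = 0

    def push(self, transition):
        # New transitions get the current max priority so each experience
        # is replayed at least once before its TD error is known.
        max_p = self.prios.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.prios[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sampling probability proportional to priority^alpha.
        p = self.prios[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        # Importance-sampling weights correct the bias of non-uniform draws.
        w = (len(self.data) * p[idx]) ** (-beta)
        w /= w.max()
        return [self.data[i] for i in idx], idx, w.astype(np.float32)

    def update_priorities(self, idx, td_errors, eps=1e-5):
        # Priority is reset to |TD error| + eps after each learning step.
        self.prios[idx] = np.abs(td_errors) + eps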
Pages: 15