USVs Path Planning for Maritime Search and Rescue Based on POS-DQN: Probability of Success-Deep Q-Network

Cited by: 0
Authors
Liu, Lu [1 ]
Shan, Qihe [1 ]
Xu, Qi [2 ]
Affiliations
[1] Dalian Maritime Univ, Nav Coll, Dalian 116026, Peoples R China
[2] Zhejiang Lab, Res Inst Intelligent Networks, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SAR; task allocation; deep reinforcement learning;
DOI
10.3390/jmse12071158
CLC Classification
U6 [Water transportation]; P75 [Ocean engineering];
Discipline Codes
0814 ; 081505 ; 0824 ; 082401 ;
Abstract
Efficient maritime search and rescue (SAR) is crucial for responding to maritime emergencies. In traditional SAR, fixed search path planning is inefficient because it cannot prioritize high-probability regions, which is a significant limitation. To address these problems, this paper proposes a POS-DQN-based path planning method for unmanned surface vehicles (USVs) in maritime SAR, enabling USVs to perform SAR tasks reasonably and efficiently. Firstly, the search region is allocated as a whole using an improved task allocation algorithm, so that each USV's task region is prioritized and non-overlapping. Secondly, this paper considers the probability of success (POS) of the search environment and proposes a POS-DQN algorithm based on deep reinforcement learning. This algorithm can adapt to the complex and changing environment of SAR: it designs a probability-weight reward function and trains USV agents to obtain the optimal search path. Finally, simulation results show that, while guaranteeing complete coverage with obstacle and collision avoidance, the search paths produced by this algorithm prioritize high-probability regions and improve the efficiency of SAR.
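The abstract does not give the exact form of the probability-weight reward function, but its role can be sketched as follows. This is an illustrative, hypothetical reward shaping for a DQN agent on a discretized search grid: the names `pos_weighted_reward`, the weight `w_pos`, and all penalty values are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def pos_weighted_reward(pos_grid, cell, visited, obstacle_grid,
                        w_pos=10.0, step_cost=-0.1,
                        revisit_penalty=-1.0, obstacle_penalty=-5.0):
    """Hypothetical reward for a USV entering `cell` on a search grid.

    pos_grid      : 2-D array of per-cell probability of success (POS)
    visited       : set of cells already searched (no POS credit twice)
    obstacle_grid : boolean 2-D array, True where the cell is blocked
    """
    r, c = cell
    if obstacle_grid[r, c]:
        return obstacle_penalty              # collision: strongly penalized
    if cell in visited:
        return step_cost + revisit_penalty   # duplicate coverage wastes effort
    # Unvisited free cell: small step cost plus POS-proportional credit,
    # so the agent learns to search high-probability regions first.
    return step_cost + w_pos * pos_grid[r, c]

# Toy 3x3 search region with one high-POS cell and one obstacle
pos = np.array([[0.05, 0.10, 0.05],
                [0.10, 0.40, 0.10],
                [0.05, 0.10, 0.05]])
obstacles = np.zeros((3, 3), dtype=bool)
obstacles[0, 2] = True
```

Under this shaping, the high-POS center cell yields the largest reward, so a DQN trained against it tends to route the USV through high-probability regions before sweeping the remainder, which matches the prioritization behavior described in the abstract.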
Pages: 19
Related Papers (50 total)
  • [31] A Continuous Space Path Planning Method for Unmanned Aerial Vehicle Based on Particle Swarm Optimization-Enhanced Deep Q-Network
    Han, Le
    Zhang, Hui
    An, Nan
    DRONES, 2025, 9 (02)
  • [32] Multi-Agent Path Planning Method Based on Improved Deep Q-Network in Dynamic Environments
    Li S.
    Li M.
    Jing Z.
    Journal of Shanghai Jiaotong University (Science), 2024, 29 (04) : 601 - 612
  • [33] Dynamic collision avoidance for maritime autonomous surface ships based on deep Q-network with velocity obstacle method
    Li, Yuqin
    Wu, Defeng
    Wang, Hongdong
    Lou, Jiankun
    OCEAN ENGINEERING, 2025, 320
  • [34] Path planning of mobile robot in unknown dynamic continuous environment using reward-modified deep Q-network
    Huang, Runnan
    Qin, Chengxuan
    Li, Jian Ling
    Lan, Xuejing
    OPTIMAL CONTROL APPLICATIONS & METHODS, 2023, 44 (03): : 1570 - 1587
  • [35] Improved Double Deep Q-Network Algorithm Applied to Multi-Dimensional Environment Path Planning of Hexapod Robots
    Chen, Liuhongxu
    Wang, Qibiao
    Deng, Chao
    Xie, Bo
    Tuo, Xianguo
    Jiang, Gang
    SENSORS, 2024, 24 (07)
  • [36] Manipulation-Compliant Artificial Potential Field and Deep Q-Network: Large Ships Path Planning Based on Deep Reinforcement Learning and Artificial Potential Field
    Xu, Weifeng
    Zhu, Xiang
    Gao, Xiaori
    Li, Xiaoyong
    Cao, Jianping
    Ren, Xiaoli
    Shao, Chengcheng
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2024, 12 (08)
  • [37] Path planning of multi-UAVs based on deep Q-network for energy-efficient data collection in UAVs-assisted IoT
    Zhu, Xiumin
    Wang, Lingling
    Li, Yumei
    Song, Shudian
    Ma, Shuyue
    Yang, Feng
    Zhai, Linbo
    VEHICULAR COMMUNICATIONS, 2022, 36
  • [38] Multiple UAS Traffic Planning Based on Deep Q-Network with Hindsight Experience Replay and Economic Considerations
    Seah, Shao Xuan
    Srigrarom, Sutthiphong
    AEROSPACE, 2023, 10 (12)
  • [39] Dynamic path planning via Dueling Double Deep Q-Network (D3QN) with prioritized experience replay
    Gok, Mehmet
    APPLIED SOFT COMPUTING, 2024, 158
  • [40] Path following optimization of unmanned ships based on adaptive line-of-sight guidance and Deep Q-Network
    Wei, Gangwen
    Yang, Jie
    Proceedings - 2022 International Conference on Machine Learning and Intelligent Systems Engineering, MLISE 2022, 2022, : 288 - 291