USVs Path Planning for Maritime Search and Rescue Based on POS-DQN: Probability of Success-Deep Q-Network

Cited: 0
Authors
Liu, Lu [1 ]
Shan, Qihe [1 ]
Xu, Qi [2 ]
Affiliations
[1] Dalian Maritime Univ, Nav Coll, Dalian 116026, Peoples R China
[2] Zhejiang Lab, Res Inst Intelligent Networks, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
SAR; task allocation; deep reinforcement learning
DOI
10.3390/jmse12071158
Chinese Library Classification (CLC)
U6 [Waterway Transportation]; P75 [Ocean Engineering]
Subject Classification Codes
0814; 081505; 0824; 082401
Abstract
Efficient maritime search and rescue (SAR) is crucial for responding to maritime emergencies. Traditional SAR relies on fixed search paths, which are inefficient and cannot prioritize high-probability regions, a significant limitation. To address these problems, this paper proposes POS-DQN-based path planning for unmanned surface vehicles (USVs) in maritime SAR, enabling USVs to perform SAR tasks reasonably and efficiently. First, the search region is partitioned as a whole with an improved task allocation algorithm so that each USV's task region is prioritized and non-overlapping. Second, the paper incorporates the probability of success (POS) of the search environment into a POS-DQN algorithm based on deep reinforcement learning, which can adapt to the complex and changing SAR environment; it designs a probability-weighted reward function and trains USV agents to obtain optimal search paths. Finally, simulation results show that, under complete coverage with obstacle and collision avoidance, the paths produced by this algorithm prioritize high-probability regions and improve SAR efficiency.
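To make the idea of a probability-weighted reward concrete, the sketch below shows one way such a reward could be shaped in a grid-world coverage formulation. It is a minimal illustration under assumed conventions (per-cell POS values, a visited mask, and hypothetical weight constants), not the paper's actual implementation.

```python
import numpy as np

def pos_weighted_reward(pos_grid, visited, cell, collided,
                        w_pos=10.0, revisit_penalty=-1.0,
                        collision_penalty=-20.0, step_cost=-0.1):
    """Illustrative probability-weighted reward for a coverage-search agent.

    pos_grid : 2-D array of per-cell probability of success (POS) values.
    visited  : boolean array marking cells the USV has already searched.
    cell     : (row, col) the agent moves into at this step.
    collided : True if the move hits an obstacle or another USV.
    All weights are hypothetical tuning constants, not values from the paper.
    """
    r, c = cell
    if collided:
        # Strongly discourage collisions with obstacles or other USVs.
        return collision_penalty
    if visited[r, c]:
        # Re-searching an already covered cell wastes time but is not catastrophic.
        return revisit_penalty + step_cost
    # New cell: reward scales with its POS, steering the learned policy toward
    # high-probability regions, while the step cost keeps paths short.
    return w_pos * pos_grid[r, c] + step_cost


# Toy usage: a 3x3 task region with one high-POS cell at the center.
pos_grid = np.array([[0.05, 0.10, 0.05],
                     [0.10, 0.60, 0.10],
                     [0.05, 0.10, 0.05]])
visited = np.zeros_like(pos_grid, dtype=bool)
print(pos_weighted_reward(pos_grid, visited, (1, 1), collided=False))  # 5.9
```

In a DQN setting, a reward of this shape would simply replace the environment's step reward; the relative magnitudes of the POS weight and the penalty terms determine how strongly the agent trades off coverage speed against visiting high-probability cells first.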
Pages: 19
Related Papers (50 total)
  • [1] Research on the Local Path Planning for Mobile Robots Based on PRO-Dueling Deep Q-Network (DQN) Algorithm
    Zhang, Yaoyu
    Li, Caihong
    Zhang, Guosheng
    Zhou, Ruihong
    Liang, Zhenying
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2023, 14 (08) : 381 - 387
  • [2] Path planning for unmanned vehicle reconnaissance based on deep Q-network
    Xia, Yuqi
    Huang, Yanyan
    Chen, Qia
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2024, 46 (09): 3070 - 3081
  • [3] Multirobot Coverage Path Planning Based on Deep Q-Network in Unknown Environment
    Li, Wenhao
    Zhao, Tao
    Dian, Songyi
    JOURNAL OF ROBOTICS, 2022, 2022
  • [4] Tuning Apex DQN: A Reinforcement Learning based Deep Q-Network Algorithm
    Ruhela, Dhani
    Ruhela, Amit
    PRACTICE AND EXPERIENCE IN ADVANCED RESEARCH COMPUTING 2024, PEARC 2024, 2024,
  • [5] Path planning of mobile robot based on improved double deep Q-network algorithm
    Wang, Zhenggang
    Song, Shuhong
    Cheng, Shenghui
    FRONTIERS IN NEUROROBOTICS, 2025, 19
  • [6] The optimal dispatching strategy of cogeneration based on Deep Q-Network (DQN) algorithm
    Zhang, Pei
    Fu, Yan
    Yao, Fu
    SCIENCE AND TECHNOLOGY FOR ENERGY TRANSITION, 2024, 79
  • [7] Path Planning of Unmanned Helicopter in Complex Environment Based on Heuristic Deep Q-Network
    Yao, Jiangyi
    Li, Xiongwei
    Zhang, Yang
    Ji, Jingyu
    Wang, Yanchao
    Liu, Yicen
    INTERNATIONAL JOURNAL OF AEROSPACE ENGINEERING, 2022, 2022
  • [8] AGV Path Planning with Dynamic Obstacles Based on Deep Q-Network and Distributed Training
    Xie, Tingbo
    Yao, Xifan
    Jiang, Zhenhong
    Meng, Junting
    INTERNATIONAL JOURNAL OF PRECISION ENGINEERING AND MANUFACTURING-GREEN TECHNOLOGY, 2025,
  • [9] Transport robot path planning based on an advantage dueling double deep Q-network
    He, Q.
    Wang, Q.
    Li, J.
    Wang, Z.
    Wang, T.
    Qinghua Daxue Xuebao/Journal of Tsinghua University, 2022, 62 (11): 1751 - 1757
  • [10] A Novel Path Planning for AUV Based on Dung Beetle Optimisation Algorithm with Deep Q-Network
    Li, Baogang
    Zhang, Hanbin
    Shi, Xianpeng
    INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2025, 40 (01): 65 - 73