Ship Collision Avoidance Using Constrained Deep Reinforcement Learning

Cited by: 0
Authors
Zhang, Rui [1 ]
Wang, Xiao [2 ]
Liu, Kezhong [3 ]
Wu, Xiaolie [4 ]
Lu, Tianyou [2 ]
Chao Zhaohui [2 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp Sci & Technol, Hubei Key Lab Transportat Internet Things, Wuhan 434070, Hubei, Peoples R China
[2] Wuhan Univ Technol, Sch Comp Sci & Technol, Wuhan 434070, Hubei, Peoples R China
[3] Wuhan Univ Technol, Sch Nav, Hubei Key Lab Inland Shipping Technol, Wuhan 434070, Hubei, Peoples R China
[4] Wuhan Univ Technol, Sch Nav, Wuhan 434070, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
reinforcement learning; constraint; collision avoidance; Deep Q Network;
DOI
10.1109/BESC.2018.00031
CLC Classification
TP39 [Applications of Computers];
Subject Classification
081203; 0835;
Abstract
In recent years, the rapid development of mobile technology and application platforms has provided better services for daily life and work, and artificial intelligence combined with mobile technology has made transportation ever more convenient. As an artificial intelligence method that intersects with multiple disciplines and fields, reinforcement learning has proved highly effective in autonomous vehicle driving. However, many difficulties remain in ship collision avoidance, because it involves continuous actions and complicated regulations. We find that by constraining the states, actions, and regulations of reinforcement learning, we can apply it effectively to ship collision avoidance despite the vast state and action spaces. Hence, we propose Constrained-DQN (Deep Q Network), which limits the state and action sets and separates the reward values according to different regulations. Experiments show that Constrained-DQN is more stable and adaptive in handling continuous spaces than traditional path planning algorithms.
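The abstract describes the method only at a high level. The sketch below is not the authors' code; it is a minimal illustration, under assumptions introduced here, of the two ideas the abstract names: masking the DQN action set by the applicable rule (constrained actions) and composing the reward per regulation (separated reward values). The state/action layout, encounter labels, thresholds, and all function names are hypothetical.

```python
# Minimal sketch (assumptions throughout) of a constrained DQN for ship
# collision avoidance: rule-based action masking + regulation-separated reward.
import torch
import torch.nn as nn

N_STATE = 6   # assumed state: [own course, own speed, target bearing, range, DCPA, TCPA]
N_ACTION = 7  # assumed actions: rudder commands -30..+30 deg in 10-deg steps


class QNet(nn.Module):
    """Plain DQN value network over the constrained (discretised) state."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STATE, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTION),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)


def action_mask(encounter: str) -> torch.Tensor:
    """Constrain the action set by the rule that applies to the encounter.
    Assumption: in head-on / crossing give-way situations COLREGs favour a
    starboard turn, so port-turn actions are masked out."""
    mask = torch.ones(N_ACTION, dtype=torch.bool)
    if encounter in ("head-on", "crossing-give-way"):
        mask[: N_ACTION // 2] = False  # lower indices = port turns (assumed layout)
    return mask


def select_action(qnet: QNet, state: torch.Tensor, encounter: str) -> int:
    """Greedy action chosen only among the allowed (unmasked) actions."""
    with torch.no_grad():
        q = qnet(state)
    q = q.masked_fill(~action_mask(encounter), float("-inf"))
    return int(q.argmax().item())


def reward(d_goal: float, d_nearest: float, encounter: str, turned_starboard: bool) -> float:
    """Reward separated by regulation: a shared goal/safety term plus a
    rule-specific compliance term, so different encounters score differently."""
    r = -0.01 * d_goal                  # progress toward the goal
    if d_nearest < 0.5:                 # assumed safety radius (nautical miles)
        r -= 10.0                       # near-collision penalty
    if encounter in ("head-on", "crossing-give-way"):
        r += 0.5 if turned_starboard else -0.5  # comply with the give-way rule
    return r


if __name__ == "__main__":
    qnet = QNet()
    s = torch.zeros(N_STATE)
    a = select_action(qnet, s, "head-on")
    print("action:", a, "reward:", reward(1.0, 0.3, "head-on", a >= N_ACTION // 2))
```

The point mirrored from the abstract is that rule-violating manoeuvres never enter the argmax, and the reward is assembled per regulation rather than as one undifferentiated signal; the specific mask layout and reward weights above are illustrative only.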
Pages: 115-120
Page count: 6
Related Papers
50 records in total
  • [1] DEEP REINFORCEMENT LEARNING FOR SHIP COLLISION AVOIDANCE AND PATH TRACKING
    Singht, Amar Nath
    Vijayakumar, Akash
    Balasubramaniyam, Shankruth
    Somayajula, Abhilash
    PROCEEDINGS OF ASME 2024 43RD INTERNATIONAL CONFERENCE ON OCEAN, OFFSHORE AND ARCTIC ENGINEERING, OMAE2024, VOL 5B, 2024,
  • [2] Deep reinforcement learning-based collision avoidance for an autonomous ship
    Chun, Do-Hyun
    Roh, Myung-Il
    Lee, Hye-Won
    Ha, Jisang
    Yu, Donghun
    OCEAN ENGINEERING, 2021, 234
  • [3] Pedestrian Collision Avoidance Using Deep Reinforcement Learning
    Alireza Rafiei
    Amirhossein Oliaei Fasakhodi
    Farshid Hajati
    International Journal of Automotive Technology, 2022, 23 : 613 - 622
  • [4] Pedestrian Collision Avoidance Using Deep Reinforcement Learning
    Rafiei, Alireza
    Fasakhodi, Amirhossein Oliaei
    Hajati, Farshid
    INTERNATIONAL JOURNAL OF AUTOMOTIVE TECHNOLOGY, 2022, 23 (03) : 613 - 622
  • [5] Automatic ship collision avoidance using deep reinforcement learning with LSTM in continuous action spaces
    Sawada, Ryohei
    Sato, Keiji
    Majima, Takahiro
    JOURNAL OF MARINE SCIENCE AND TECHNOLOGY, 2021, 26 (02) : 509 - 524
  • [6] Automatic ship collision avoidance using deep reinforcement learning with LSTM in continuous action spaces
    Ryohei Sawada
    Keiji Sato
    Takahiro Majima
    Journal of Marine Science and Technology, 2021, 26 : 509 - 524
  • [7] Collision avoidance for autonomous ship using deep reinforcement learning and prior-knowledge-based approximate representation
    Wang, Chengbo
    Zhang, Xinyu
    Yang, Zaili
    Bashir, Musa
    Lee, Kwangil
    FRONTIERS IN MARINE SCIENCE, 2023, 9
  • [8] TRANSFER REINFORCEMENT LEARNING: FEATURE TRANSFERABILITY IN SHIP COLLISION AVOIDANCE
    Wang, Xinrui
    Jin, Yan
    PROCEEDINGS OF ASME 2023 INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE, IDETC-CIE2023, VOL 3B, 2023,
  • [9] Collision avoidance for an unmanned surface vehicle using deep reinforcement learning
    Woo, Joohyun
    Kim, Nakwan
    OCEAN ENGINEERING, 2020, 199
  • [10] Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control
    Blaise, James
    Bazzocchi, Michael C. F.
    AEROSPACE, 2023, 10 (09)