A Heuristically Accelerated Reinforcement Learning-Based Neurosurgical Path Planner

Cited by: 0

Authors
Ji G. [1 ,2 ]
Gao Q. [1 ,2 ]
Zhang T. [2 ]
Cao L. [3 ]
Sun Z. [1 ,2 ]
Institutions
[1] School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen
[2] Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen
[3] Department of Automatic Control and Systems Engineering, University of Sheffield
DOI
10.34133/CBSYSTEMS.0026
Abstract
Steerable needles are appealing in neurosurgical intervention procedures because of their flexibility to bypass critical regions inside the brain; with proper path planning, they can also minimize potential damage by setting constraints and optimizing the insertion path. Recently, reinforcement learning (RL)-based path planning algorithms have shown promising results in neurosurgery, but because of their trial-and-error mechanism they can be computationally expensive and unsafe, with low training efficiency. In this paper, we propose a heuristically accelerated deep Q network (DQN) algorithm to safely plan a needle insertion path preoperatively in a neurosurgical environment. Furthermore, a fuzzy inference system is integrated into the framework to balance the heuristic policy and the RL algorithm. Simulations are conducted to compare the proposed method against a traditional greedy heuristic search algorithm and the DQN algorithm. The tests showed promising results: our algorithm saves over 50 training episodes and achieves a normalized path length of 0.35, compared with 0.61 for DQN and 0.39 for the traditional greedy heuristic search algorithm. Moreover, the maximum curvature during planning is reduced from 0.139 mm⁻¹ (DQN) to 0.046 mm⁻¹ with the proposed algorithm. Copyright © 2023 Guanglin Ji et al.
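The core idea of heuristic acceleration as described in the abstract, blending a heuristic policy into DQN action selection under a fuzzy weight, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the function names, the triangular fuzzy membership, and the `confidence` signal driving the blend are all assumptions.

```python
import numpy as np

def fuzzy_heuristic_weight(confidence: float) -> float:
    """Triangular fuzzy membership (illustrative assumption): trust the
    heuristic when confidence in the learned Q-values is low, and fade
    it out as training matures. `confidence` lies in [0, 1]."""
    return float(np.clip(1.0 - confidence, 0.0, 1.0))

def select_action(q_values, heuristic_scores, confidence, epsilon, rng):
    """Heuristically accelerated epsilon-greedy action selection (sketch).

    q_values         : learned action values Q(s, .) from the DQN
    heuristic_scores : heuristic preferences H(s, .), e.g. negative
                       distance-to-target for each candidate action
    confidence       : scalar in [0, 1] driving the fuzzy blend
    epsilon          : exploration rate
    """
    if rng.random() < epsilon:
        # Ordinary exploration step.
        return int(rng.integers(len(q_values)))
    w = fuzzy_heuristic_weight(confidence)
    # Greedy step over the fuzzy-weighted blend of Q-values and heuristic.
    blended = np.asarray(q_values) + w * np.asarray(heuristic_scores)
    return int(np.argmax(blended))
```

Early in training (`confidence` near 0) the heuristic term dominates and steers exploration toward plausible insertion paths; as the Q-estimates mature (`confidence` near 1) the agent falls back to pure DQN greedy selection.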
Related Papers (50 in total)
  • [31] Transfer Learning-Based Accelerated Deep Reinforcement Learning for 5G RAN Slicing
    Nagib, Ahmad M.
    Abou-Zeid, Hatem
    Hassanein, Hossam S.
    PROCEEDINGS OF THE IEEE 46TH CONFERENCE ON LOCAL COMPUTER NETWORKS (LCN 2021), 2021, : 249 - 256
  • [32] Reinforcement Learning-based path tracking for underactuated UUV under intermittent communication
    Liu Z.
    Cai W.
    Zhang M.
OCEAN ENGINEERING, 2023, 288
  • [33] Rescue path planning for urban flood: A deep reinforcement learning-based approach
    Li, Xiao-Yan
    Wang, Xia
    RISK ANALYSIS, 2024,
  • [34] Curriculum reinforcement learning-based drifting along a general path for autonomous vehicles
    Yu, Kai
    Fu, Mengyin
    Tian, Xiaohui
    Yang, Shuaicong
    Yang, Yi
    ROBOTICA, 2024, 42 (10) : 3263 - 3280
  • [35] Deep reinforcement learning-based controller for path following of an unmanned surface vehicle
    Woo, Joohyun
    Yu, Chanwoo
    Kim, Nakwan
    OCEAN ENGINEERING, 2019, 183 : 155 - 166
  • [36] Reinforcement learning-based radar-evasive path planning: a comparative analysis
    Hameed, R. U.
    Maqsood, A.
    Hashmi, A. J.
    Saeed, M. T.
    Riaz, R.
    AERONAUTICAL JOURNAL, 2022, 126 (1297): : 547 - 564
  • [37] Deep Reinforcement Learning-Based Robotic Puncturing Path Planning of Flexible Needle
    Lin, Jun
    Huang, Zhiqiang
    Zhu, Tengliang
    Leng, Jiewu
    Huang, Kai
    PROCESSES, 2024, 12 (12)
  • [38] Reinforcement Learning-based Path Following Control for a Vehicle with Variable Delay in the Drivetrain
    Ultsch, Johannes
    Mirwald, Jonas
    Brembeck, Jonathan
    de Castro, Ricardo
    2020 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2020, : 532 - 539
  • [39] Reinforcement Learning-Based Approach to Robot Path Tracking in Nonlinear Dynamic Environments
    Chen, Wei
    Zhou, Zebin
    INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS, 2024, 21 (04)
  • [40] Reinforcement learning-based fuzzy controller for autonomous guided vehicle path tracking
    Kuo, Ping-Huan
    Chen, Sing-Yan
    Feng, Po-Hsun
    Chang, Chen-Wen
    Huang, Chiou-Jye
    Peng, Chao-Chung
    ADVANCED ENGINEERING INFORMATICS, 2025, 65