A Heuristically Accelerated Reinforcement Learning-Based Neurosurgical Path Planner

Cited by: 0
Authors
Ji G. [1 ,2 ]
Gao Q. [1 ,2 ]
Zhang T. [2 ]
Cao L. [3 ]
Sun Z. [1 ,2 ]
Affiliations
[1] School of Science and Engineering, Chinese University of Hong Kong, Shenzhen
[2] Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen
[3] Department of Automatic Control and Systems Engineering, University of Sheffield
DOI
10.34133/CBSYSTEMS.0026
Abstract
Steerable needles are appealing in neurosurgical intervention procedures because of their flexibility to bypass critical regions inside the brain; with proper path planning, they can also minimize potential damage by setting constraints and optimizing the insertion path. Recently, reinforcement learning (RL)-based path planning algorithms have shown promising results in neurosurgery, but because of their trial-and-error mechanism, they can be computationally expensive and unsafe, with low training efficiency. In this paper, we propose a heuristically accelerated deep Q network (DQN) algorithm to safely plan a needle insertion path preoperatively in a neurosurgical environment. Furthermore, a fuzzy inference system is integrated into the framework to balance the heuristic policy and the RL algorithm. Simulations are conducted to test the proposed method against a traditional greedy heuristic search algorithm and the DQN algorithm. Tests showed promising results: our algorithm saves over 50 training episodes and yields a normalized path length of 0.35, compared with 0.61 for DQN and 0.39 for the traditional greedy heuristic search algorithm. Moreover, the maximum curvature during planning is reduced from 0.139 mm−1 with DQN to 0.046 mm−1 with the proposed algorithm. Copyright © 2023 Guanglin Ji et al.
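The abstract does not give implementation details, but the core idea of heuristically accelerated Q-learning (biasing greedy action selection by a heuristic term, with a fuzzy system weighting how much the heuristic is trusted) can be sketched as below. All names (`heuristic_action`, `fuzzy_weight`), the triangular membership functions, and the specific weighting scheme are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def heuristic_action(q_values, h_values, xi, epsilon, rng):
    """Epsilon-greedy action selection biased by a heuristic, in the spirit of
    heuristically accelerated Q-learning: argmax_a [Q(s,a) + xi * H(s,a)].
    xi controls how strongly the heuristic policy influences the choice."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # random exploratory action
    return int(np.argmax(q_values + xi * h_values))

def fuzzy_weight(progress):
    """Toy fuzzy inference for the heuristic weight xi (hypothetical scheme):
    trust the heuristic early in training and the learned Q-values later.
    Triangular memberships over normalized training progress in [0, 1]."""
    early = max(0.0, 1.0 - 2.0 * progress)   # membership of "early training"
    late = max(0.0, 2.0 * progress - 1.0)    # membership of "late training"
    mid = 1.0 - early - late                 # membership of "mid training"
    # Defuzzify: weighted average of per-rule heuristic weights (1.0, 0.5, 0.0)
    return early * 1.0 + mid * 0.5 + late * 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = np.array([0.0, 0.5, 0.2])   # learned action values for one state
    h = np.array([1.0, 0.0, 0.0])   # heuristic prefers action 0 (e.g. toward target)
    xi = fuzzy_weight(progress=0.1)  # early training: heuristic dominates
    print(heuristic_action(q, h, xi, epsilon=0.0, rng=rng))
```

Early in training (`xi` near 1) the heuristic steers exploration toward plausible insertion paths, which is one way the reported saving of over 50 training episodes could arise; as training progresses, `xi` decays and the learned Q-values take over.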
Related Papers
50 records in total
  • [41] A Reinforcement Learning-Based Strategy of Path Following for Snake Robots with an Onboard Camera
    Liu, Lixing
    Guo, Xian
    Fang, Yongchun
    SENSORS, 2022, 22 (24)
  • [42] Learning-Based Disassembly Process Planner for Uncertainty Management
    Tang, Ying
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS, 2009, 39 (01): : 134 - 143
  • [43] HR-Planner: A Hierarchical Highway Tactical Planner based on Residual Reinforcement Learning
    Wu, Haoran
    Li, Yueyuan
    Zhuang, Hanyang
    Wang, Chunxiang
    Yang, Ming
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 7263 - 7269
  • [44] A Reinforcement Learning-Based Adaptive Learning System
    Shawky, Doaa
    Badawi, Ashraf
    INTERNATIONAL CONFERENCE ON ADVANCED MACHINE LEARNING TECHNOLOGIES AND APPLICATIONS (AMLTA2018), 2018, 723 : 221 - 231
  • [45] DRL-DCLP: A Deep Reinforcement Learning-Based Dimension-Configurable Local Planner for Robot Navigation
    Zhang, Wei
    Wang, Shanze
    Tan, Mingao
    Yang, Zhibo
    Wang, Xianghui
    Shen, Xiaoyu
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (04): : 3636 - 3643
  • [46] Immune deep reinforcement learning-based path planning for mobile robot in unknown environment
    Yan, Chengliang
    Chen, Guangzhu
    Li, Yang
    Sun, Fuchun
    Wu, Yuanyuan
    APPLIED SOFT COMPUTING, 2023, 145
  • [47] Deep Reinforcement Learning-Based Path Planning with Dynamic Collision Probability for Mobile Robots
    Tariq, Muhammad Taha
    Wang, Congqing
    Hussain, Yasir
    2024 WRC SYMPOSIUM ON ADVANCED ROBOTICS AND AUTOMATION, WRC SARA, 2024, : 9 - 14
  • [48] Reinforcement Learning-Based Energy-Saving Path Planning for UAVs in Turbulent Wind
    Chen, Shaonan
    Mo, Yuhong
    Wu, Xiaorui
    Xiao, Jing
    Liu, Quan
    ELECTRONICS, 2024, 13 (16)
  • [49] RETRACTED: Reinforcement Learning-Based Path Planning Algorithm for Mobile Robots (Retracted Article)
    Liu, ZiXuan
    Wang, Qingchuan
    Yang, Bingsong
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [50] Reinforcement Learning-based Hierarchical Control for Path Following of a Salamander-like Robot
    Zhang, Xueyou
    Guo, Xian
    Fang, Yongchun
    Zhu, Wei
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 6077 - 6083