Deep Reinforcement Learning for Vessel Centerline Tracing in Multi-modality 3D Volumes

Cited by: 28
Authors
Zhang, Pengyue [1 ,2 ]
Wang, Fusheng [1 ]
Zheng, Yefeng [2 ]
Affiliations
[1] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
[2] Siemens Healthineers, Med Imaging Technol, Princeton, NJ 08540 USA
Funding
US National Science Foundation;
Keywords
DOI
10.1007/978-3-030-00937-3_86
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Accurate vessel centerline tracing greatly benefits assessment of vessel centerline geometry and facilitates precise measurements of vessel diameters and lengths. However, the cursive and longitudinal geometry of vessels makes centerline tracing a challenging task in volumetric images. Treating the problem with traditional feature handcrafting is often ad hoc and time-consuming, resulting in suboptimal solutions. In this work, we propose a unified end-to-end deep reinforcement learning approach for robust vessel centerline tracing in multi-modality 3D medical volumes. Instead of performing a time-consuming exhaustive search in 3D space, we train an artificial agent to interact with its surrounding environment and collect rewards from the interaction. A deep neural network is integrated into the system to predict the stepwise action value of every possible action. With this mechanism, the agent probes along an optimal navigation path to trace the vessel centerline. Our proposed approach is evaluated on a dataset of over 2,000 3D volumes with diverse imaging modalities, including contrasted CT, non-contrasted CT, C-arm CT and MR images. The experimental results show that the proposed approach can handle large variations from vessel shape to imaging characteristics, with a tracing error as low as 3.28 mm and a detection time as fast as 1.71 s per volume.
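The tracing mechanism the abstract describes can be sketched as a greedy policy rollout: at each voxel a value network scores every candidate step, and the agent moves in the highest-valued direction until it leaves the volume or revisits a position. The sketch below is a minimal illustration, not the authors' implementation; `q_values_fn` is a hypothetical stand-in for the trained deep Q-network, and the six axis-aligned actions, loop guard, and stopping rules are simplifying assumptions.

```python
import numpy as np

# Six axis-aligned unit steps in a 3D volume: +/-x, +/-y, +/-z
# (an assumed discrete action set for illustration).
ACTIONS = np.array([
    [1, 0, 0], [-1, 0, 0],
    [0, 1, 0], [0, -1, 0],
    [0, 0, 1], [0, 0, -1],
])

def trace_centerline(q_values_fn, start, volume_shape, max_steps=500):
    """Greedily follow the learned policy: at each voxel, take the
    action with the highest predicted value. `q_values_fn` maps a
    3D position to six stepwise action values."""
    pos = np.asarray(start, dtype=int)
    path = [tuple(pos)]
    for _ in range(max_steps):
        q = q_values_fn(pos)                    # shape (6,)
        pos = pos + ACTIONS[int(np.argmax(q))]  # greedy step
        if np.any(pos < 0) or np.any(pos >= np.asarray(volume_shape)):
            break                               # left the volume
        if tuple(pos) in path[-8:]:
            break                               # simple loop guard
        path.append(tuple(pos))
    return path

# Toy stand-in for a trained network: always prefers +x, so the
# traced "centerline" is a straight line along the x axis.
toy_q = lambda pos: np.array([1.0, 0, 0, 0, 0, 0])
path = trace_centerline(toy_q, start=(0, 5, 5), volume_shape=(10, 10, 10))
```

In the paper's full system the reward shaping and network architecture drive the agent along the actual vessel; here the toy Q-function only demonstrates the rollout loop.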
Pages: 755-763
Number of pages: 9
Related Papers
50 records in total
  • [1] 3D shape recognition and retrieval based on multi-modality deep learning
    Bu, Shuhui
    Wang, Lei
    Han, Pengcheng
    Liu, Zhenbao
    Li, Ke
    NEUROCOMPUTING, 2017, 259 : 183 - 193
  • [2] A 3D Multi-Modality Lung Tumor Segmentation Method Based on Deep Learning
    Wang, S.
    Yuan, L.
    Weiss, E.
    Mahon, R.
    MEDICAL PHYSICS, 2021, 48 (06)
  • [3] CoroEval: a multi-platform, multi-modality tool for the evaluation of 3D coronary vessel reconstructions
    Schwemmer, C.
    Forman, C.
    Wetzl, J.
    Maier, A.
    Hornegger, J.
    PHYSICS IN MEDICINE AND BIOLOGY, 2014, 59 (17): 5163 - 5174
  • [4] COCGV: A method for multi-modality 3D volume registration
    Ostuni, JL
    Hsu, L
    Frank, JA
    1998 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING - PROCEEDINGS, VOL 2, 1998, : 25 - 28
  • [5] 3D Thermo Scan - Multi-Modality Image Registration
    de Souza, Mauren Abreu
    Krefer, Andriy G.
    Borba, Gustavo Benvenutti
    Gamba, Humberto R.
    PROCEEDINGS OF THE 2016 SAI COMPUTING CONFERENCE (SAI), 2016, : 302 - 306
  • [6] Multi-modality self-attention aware deep network for 3D biomedical segmentation
    Jia, Xibin
    Liu, Yunfeng
    Yang, Zhenghan
    Yang, Dawei
    BMC MEDICAL INFORMATICS AND DECISION MAKING, 2020, 20 (Suppl 3)
  • [7] ON THE CAPABILITIES OF A MULTI-MODALITY 3D BIOPRINTER FOR CUSTOMIZED BIOMEDICAL DEVICES
    Ravi, Prashanth
    Shiakolas, Panos S.
    Welch, Tre
    Saini, Tushar
    Guleserian, Kristine
    Batra, Ankit K.
    PROCEEDINGS OF THE ASME INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION, 2015, VOL 2A, 2016
  • [8] MULTI-MODALITY ANALYSIS OF A 3D PRINTED BIOCOMPATIABLE POLYMER SCAFFOLD
    Sutherland, Nigel
    Shen, Yihong
    Li, Qin
    Zhang, Lihai
    Mo, Xiumei
    van Gaal, William Joseph, III
    Barlis, Peter
    Poon, Eric
    JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY, 2022, 79 (09): 2018 - 2018
  • [9] Multi-modality 3D object detection in autonomous driving: A review
    Tang, Yingjuan
    He, Hongwen
    Wang, Yong
    Mao, Zan
    Wang, Haoyu
    NEUROCOMPUTING, 2023, 553