U-Net based deep learning bladder segmentation in CT urography

Cited by: 53
Authors
Ma, Xiangyuan [1 ,2 ,3 ]
Hadjiiski, Lubomir M. [1 ]
Wei, Jun [1 ]
Chan, Heang-Ping [1 ]
Cha, Kenny H. [1 ]
Cohan, Richard H. [1 ]
Caoili, Elaine M. [1 ]
Samala, Ravi [1 ]
Zhou, Chuan [1 ]
Lu, Yao [2 ,3 ]
Affiliations
[1] Univ Michigan, Dept Radiol, Ann Arbor, MI 48109 USA
[2] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou 510275, Guangdong, Peoples R China
[3] Sun Yat Sen Univ, Guangdong Prov Key Lab Computat Sci, Guangzhou 510275, Guangdong, Peoples R China
Keywords
bladder; computer-aided detection; CT urography; deep learning; segmentation; convolution neural network; multidetector row CT; wall segmentation; mass
DOI
10.1002/mp.13438
CLC (Chinese Library Classification)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline codes
1002; 100207; 1009
Abstract
Objectives: To develop a U-Net-based deep learning approach (U-DL) for bladder segmentation in computed tomography urography (CTU) as part of a computer-assisted bladder cancer detection and treatment-response assessment pipeline.

Materials and methods: A dataset of 173 cases was used with Institutional Review Board approval: 81 cases in the training/validation set (42 masses, 21 with wall thickening, 18 normal bladders) and 92 cases in the test set (43 masses, 36 with wall thickening, 13 normal bladders). An experienced radiologist provided three-dimensional (3D) hand outlines for all cases as the reference standard. We previously developed a bladder segmentation method that used a deep learning convolution neural network and level sets (DCNN-LS) within a user-input bounding box; however, cases with poor image quality or with advanced bladder cancer spreading into neighboring organs caused inaccurate segmentation. We have newly developed an automated U-DL method to estimate a likelihood map of the bladder in CTU. The U-DL did not require a user-input bounding box or level sets for postprocessing. To identify the best model for this task, we compared the following models: (a) two-dimensional (2D) U-DL and 3D U-DL, using 2D CT slices and 3D CT volumes, respectively, as input; (b) U-DLs using CT images of different resolutions as input; and (c) U-DLs with and without automated cropping of the bladder as an image preprocessing step. Segmentation accuracy relative to the reference standard was quantified by six measures: average volume intersection ratio (AVI), average percent volume error (AVE), average absolute volume error (AAVE), average minimum distance (AMD), average Hausdorff distance (AHD), and average Jaccard index (AJI). The results from our previous DCNN-LS method served as the baseline.

Results: In the test set, the best 2D U-DL model achieved AVI, AVE, AAVE, AMD, AHD, and AJI values of 93.4 ± 9.5%, -4.2 ± 14.2%, 9.2 ± 11.5%, 2.7 ± 2.5 mm, 9.7 ± 7.6 mm, and 85.0 ± 11.3%, respectively, while the corresponding measures for the best 3D U-DL were 90.6 ± 11.9%, -2.3 ± 21.7%, 11.5 ± 18.5%, 3.1 ± 3.2 mm, 11.4 ± 10.0 mm, and 82.6 ± 14.2%, respectively. For comparison, the corresponding values obtained with the baseline method on the same test set were 81.9 ± 12.1%, 10.2 ± 16.2%, 14.0 ± 13.0%, 3.6 ± 2.0 mm, 12.8 ± 6.1 mm, and 76.2 ± 11.8%, respectively. The improvements in all measures between the best U-DL and the DCNN-LS were statistically significant (P < 0.001).

Conclusion: Compared to the previous DCNN-LS method, which depended on a user-input bounding box, the U-DL provided more accurate bladder segmentation and was more automated.
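The record describes the U-DL only at a high level (an encoder-decoder network producing a per-pixel bladder likelihood map from a CT slice). As illustration, the following is a minimal 2D U-Net sketch in PyTorch; the channel counts, depth, and activation choices here are hypothetical placeholders, not the authors' published architecture or training configuration.

# Minimal 2D U-Net sketch (hypothetical layer sizes; NOT the paper's exact model).
# Input: a (B, 1, H, W) CT slice tensor; output: a bladder likelihood map in [0, 1].
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UNet2D(nn.Module):
    def __init__(self, ch=(1, 32, 64, 128)):  # assumed channel progression
        super().__init__()
        self.enc = nn.ModuleList(block(a, b) for a, b in zip(ch[:-1], ch[1:]))
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(b, a, 2, stride=2) for a, b in zip(ch[1:-1], ch[2:]))
        self.dec = nn.ModuleList(block(2 * a, a) for a in ch[1:-1])
        self.head = nn.Conv2d(ch[1], 1, 1)  # single-channel likelihood output

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < len(self.enc) - 1:   # keep skip connections, downsample
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(reversed(self.up), reversed(self.dec),
                                 reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))  # upsample + skip concat
        return torch.sigmoid(self.head(x))  # per-pixel bladder likelihood

For example, UNet2D()(torch.zeros(1, 1, 256, 256)) yields a 256 × 256 likelihood map; thresholding it gives a binary bladder mask, which is what removes the need for the user-input bounding box and level-set postprocessing of the earlier DCNN-LS pipeline.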
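The six accuracy measures are named but not defined in this record. The sketch below shows one common way to compute the per-case values (VI, VE, |VE|, MD, HD, JI) from binary 3D masks with NumPy/SciPy; the sign convention for the volume error, the symmetric averaging of surface distances, and the erosion-based surface extraction are assumptions for illustration, not definitions taken from the paper. The reported AVI, AVE, AAVE, AMD, AHD, and AJI would then be averages of these per-case values over the 92 test cases.

# Illustrative sketch (assumed conventions) of per-case segmentation measures.
import numpy as np
from scipy import ndimage

def volume_measures(seg, ref):
    """Overlap measures for binary 3D masks: VI, VE, |VE|, Jaccard (all in %)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    vi = 100.0 * inter / ref.sum()                    # volume intersection ratio
    ve = 100.0 * (ref.sum() - seg.sum()) / ref.sum()  # signed error; assumed sign convention
    return vi, ve, abs(ve), 100.0 * inter / union

def surface_measures(seg, ref, spacing=(1.0, 1.0, 1.0)):
    """Surface-distance measures in mm, given the CT voxel spacing (z, y, x)."""
    def surface(mask):
        return mask & ~ndimage.binary_erosion(mask)   # boundary voxels
    s_seg = surface(seg.astype(bool))
    s_ref = surface(ref.astype(bool))
    # Distance from each surface voxel of one mask to the nearest surface of the other
    d_to_ref = ndimage.distance_transform_edt(~s_ref, sampling=spacing)[s_seg]
    d_to_seg = ndimage.distance_transform_edt(~s_seg, sampling=spacing)[s_ref]
    md = (d_to_ref.mean() + d_to_seg.mean()) / 2      # minimum distance (symmetric avg, assumed)
    hd = max(d_to_ref.max(), d_to_seg.max())          # Hausdorff distance
    return md, hd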
Pages: 1752-1765 (14 pages)