Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation

Cited by: 6
Authors
Zhou, Tongxue [1 ]
Zhu, Shan [2 ]
Affiliations
[1] Hangzhou Normal Univ, Sch Informat Sci & Technol, Hangzhou 311121, Peoples R China
[2] Hangzhou Normal Univ, Sch Life & Environm Sci, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Brain tumor segmentation; Uncertainty quantification; Feature fusion; Multi-modality; Deep learning; MECHANISM;
DOI
10.1016/j.compbiomed.2023.107142
Chinese Library Classification
Q [Biological Sciences]
Discipline codes
07; 0710; 09
Abstract
Brain tumor is one of the most aggressive cancers in the world, and accurate brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning. Although deep learning models have achieved remarkable success in medical segmentation, they typically produce only a segmentation map without capturing the segmentation uncertainty. To achieve accurate and safe clinical results, it is necessary to produce additional uncertainty maps to assist the subsequent segmentation revision. To this end, we propose to exploit uncertainty quantification in the deep learning model and apply it to multi-modal brain tumor segmentation. In addition, we develop an effective attention-aware multi-modal fusion method to learn complementary feature information from the multiple MR modalities. First, a multi-encoder-based 3D U-Net is proposed to obtain the initial segmentation results. Then, an estimated Bayesian model is presented to measure the uncertainty of the initial segmentation results. Finally, the obtained uncertainty maps are integrated into a deep learning-based segmentation network, serving as additional constraint information to further refine the segmentation results. The proposed network is evaluated on the publicly available BraTS 2018 and BraTS 2019 datasets. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art methods on the Dice score, Hausdorff distance, and Sensitivity metrics. Furthermore, the proposed components can easily be applied to other network architectures and other computer vision fields.
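The abstract's "estimated Bayesian model" for uncertainty quantification is not detailed in this record; a common approximation in segmentation work is to draw multiple stochastic forward passes (e.g. via Monte Carlo dropout) and use the per-voxel predictive entropy of the averaged class probabilities as the uncertainty map. The sketch below, with hypothetical names and simulated predictions standing in for real network outputs, shows only that generic sampling-based step, not the paper's exact model:

```python
import numpy as np

def predictive_uncertainty(prob_samples):
    """Given T stochastic forward passes of shape (T, C, ...) holding
    class probabilities, return the mean prediction (C, ...) and a
    voxel-wise predictive-entropy uncertainty map."""
    mean_prob = prob_samples.mean(axis=0)                      # average over samples
    eps = 1e-12                                                # avoid log(0)
    entropy = -(mean_prob * np.log(mean_prob + eps)).sum(axis=0)
    return mean_prob, entropy

# Toy demo: 2-class "segmentation" on a 4x4 slice, T = 8 stochastic passes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 2, 4, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax over classes
mean_prob, unc = predictive_uncertainty(probs)
print(unc.shape)  # (4, 4)
```

High-entropy voxels mark regions where the sampled predictions disagree; in the paper's pipeline such a map is fed back into the segmentation network as an extra constraint to guide refinement.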
Pages: 9
Related papers
50 records in total
  • [21] Brain Tumor Segmentation for Multi-Modal MRI with Missing Information
    Xue Feng
    Kanchan Ghimire
    Daniel D. Kim
    Rajat S. Chandra
    Helen Zhang
    Jian Peng
    Binghong Han
    Gaofeng Huang
    Quan Chen
    Sohil Patel
    Chetan Bettagowda
    Haris I. Sair
    Craig Jones
    Zhicheng Jiao
    Li Yang
    Harrison Bai
    Journal of Digital Imaging, 2023, 36 (5) : 2075 - 2087
  • [23] A Generative Model for Brain Tumor Segmentation in Multi-Modal Images
    Menze, Bjoern H.
    Van Leemput, Koen
    Lashkari, Danial
    Weber, Marc-Andre
    Ayache, Nicholas
    Golland, Polina
MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2010, PT II, 2010, 6362 : 151 - +
  • [24] Multi-modal brain tumor image segmentation based on SDAE
    Ding, Yi
    Dong, Rongfeng
    Lan, Tian
    Li, Xuerui
    Shen, Guangyu
    Chen, Hao
    Qin, Zhiguang
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2018, 28 (01) : 38 - 47
  • [25] CMAF-Net: a cross-modal attention fusion-based deep neural network for incomplete multi-modal brain tumor segmentation
    Sun, Kangkang
    Ding, Jiangyi
    Li, Qixuan
    Chen, Wei
    Zhang, Heng
    Sun, Jiawei
    Jiao, Zhuqing
    Ni, Xinye
    QUANTITATIVE IMAGING IN MEDICINE AND SURGERY, 2024, 14 (07) : 4579 - 4604
  • [26] Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation
    Zhou, Ziqi
    Guo, Xinna
    Yang, Wanqi
    Shi, Yinghuan
    Zhou, Luping
    Wang, Lei
    Yang, Ming
    MACHINE LEARNING IN MEDICAL IMAGING (MLMI 2019), 2019, 11861 : 601 - 610
  • [27] NaMa: Neighbor-Aware Multi-Modal Adaptive Learning for Prostate Tumor Segmentation on Anisotropic MR Images
    Meng, Runqi
    Zhang, Xiao
    Huang, Shijie
    Gu, Yuning
    Liu, Guiqin
    Wu, Guangyu
    Wang, Nizhuan
    Sun, Kaicong
    Shen, Dinggang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4198 - 4206
  • [28] Local-to-Global Cross-Modal Attention-Aware Fusion for HSI-X Semantic Segmentation
    Zhang, Xuming
    Yokoya, Naoto
    Gu, Xingfa
    Tian, Qingjiu
    Bruzzone, Lorenzo
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [29] Local extreme map guided multi-modal brain image fusion
    Zhang, Yu
    Xiang, Wenhao
    Zhang, Shunli
    Shen, Jianjun
    Wei, Ran
    Bai, Xiangzhi
    Zhang, Li
    Zhang, Qing
    FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [30] Multi-category Graph Reasoning for Multi-modal Brain Tumor Segmentation
    Li, Dongzhe
    Yang, Baoyao
    Zhan, Weide
    He, Xiaochen
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT VIII, 2024, 15008 : 445 - 455