Self-Supervised Pretraining for Cardiovascular Magnetic Resonance Cine Segmentation

Cited: 0
Authors
de Mooij, Rob A. J. [1]
Pluim, Iris O. W. [1]
Scannell, Cian M. [1]
Affiliations
[1] Eindhoven University of Technology, Department of Biomedical Engineering, Eindhoven, Netherlands
Keywords
Self-supervised learning; Image segmentation; Cardiovascular magnetic resonance; Deep learning
DOI
10.1007/978-3-031-73748-0_12
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Self-supervised pretraining (SSP) has shown promising results in learning from large unlabeled datasets and could therefore be useful for automated cardiovascular magnetic resonance (CMR) short-axis cine segmentation. However, inconsistent reports of the benefits of SSP for segmentation have made it difficult to apply SSP to CMR. This study therefore aimed to evaluate SSP methods for CMR cine segmentation. To this end, short-axis cine stacks of 296 subjects (90,618 2D slices) were used for unlabeled pretraining with four SSP methods: SimCLR, positional contrastive learning, DINO, and masked image modeling (MIM). Subsets of varying numbers of subjects were used for supervised fine-tuning of 2D models for each SSP method, as well as to train a 2D baseline model from scratch. The fine-tuned models were compared to the baseline using the 3D Dice similarity coefficient (DSC) on a test dataset of 140 subjects. With the largest supervised fine-tuning subset, the SSP methods showed no performance gains over the baseline (DSC = 0.89). When only 10 subjects (231 2D slices) were available for supervised training, SSP using MIM (DSC = 0.86) improved over training from scratch (DSC = 0.82). This study found that SSP is valuable for CMR cine segmentation when labeled training data is scarce, but does not aid state-of-the-art deep learning methods when ample labeled data is available. Moreover, the choice of SSP method is important. The code is publicly available at: https://github.com/q-cardIA/ssp-cmr-cine-segmentation.
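The abstract's best-performing method in the low-data regime is masked image modeling (MIM), in which a network is pretrained on unlabeled slices by reconstructing randomly masked patches. The following is a minimal PyTorch sketch of that pretraining objective, not the authors' implementation: the patch size, mask ratio, and toy encoder-decoder are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn

def mask_patches(x: torch.Tensor, patch: int = 16, mask_ratio: float = 0.6):
    """Zero out a random subset of non-overlapping patches.

    x: (B, 1, H, W) batch of 2D cine slices; H and W divisible by `patch`.
    Returns the masked input and a boolean mask (True = masked).
    """
    B, _, H, W = x.shape
    gh, gw = H // patch, W // patch
    mask = torch.rand(B, 1, gh, gw) < mask_ratio
    # Upsample the patch-level mask to pixel resolution.
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return x.masked_fill(mask, 0.0), mask

# Toy encoder-decoder standing in for the pretrained backbone.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

x = torch.randn(4, 1, 128, 128)      # stand-in for unlabeled cine slices
x_masked, mask = mask_patches(x)
recon = model(x_masked)
# Reconstruction loss computed on the masked patches only (MAE-style MIM).
loss = ((recon - x)[mask] ** 2).mean()
loss.backward()
```

After pretraining on the unlabeled stacks, the encoder weights would be reused to initialize the segmentation model for supervised fine-tuning.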
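Evaluation in the abstract uses the 3D Dice similarity coefficient, DSC = 2|P ∩ G| / (|P| + |G|), computed per subject over the full short-axis stack rather than per 2D slice. A minimal NumPy sketch; the label coding in the example is hypothetical:

```python
import numpy as np

def dice_3d(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """3D Dice similarity coefficient for one structure label,
    computed over an entire (slices, H, W) short-axis stack."""
    p = pred == label
    g = gt == label
    denom = p.sum() + g.sum()
    if denom == 0:           # structure absent in both volumes
        return 1.0
    return 2.0 * np.logical_and(p, g).sum() / denom

# Example with a hypothetical label coding (1 = one cardiac structure).
pred = np.zeros((10, 128, 128), dtype=np.uint8)
gt = np.zeros_like(pred)
pred[:, 40:80, 40:80] = 1
gt[:, 44:84, 44:84] = 1
print(f"DSC = {dice_3d(pred, gt, label=1):.2f}")
```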
Pages: 115-124 (10 pages)
Related papers (showing 10 of 50)
  • [1] Self-Supervised Pretraining Improves Self-Supervised Pretraining
    Reed, Colorado J.
    Yue, Xiangyu
    Nrusimha, Ani
    Ebrahimi, Sayna
    Vijaykumar, Vivek
    Mao, Richard
    Li, Bo
    Zhang, Shanghang
    Guillory, Devin
    Metzger, Sean
    Keutzer, Kurt
    Darrell, Trevor
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1050 - 1060
  • [2] Self-Supervised Pretraining With Monocular Height Estimation for Semantic Segmentation
    Xiong, Zhitong
    Chen, Sining
    Shi, Yilei
    Zhu, Xiao Xiang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [3] Self-supervised pretraining improves the performance of classification of task functional magnetic resonance imaging
    Shi, Chenwei
    Wang, Yanming
    Wu, Yueyang
    Chen, Shishuo
    Hu, Rongjie
    Zhang, Min
    Qiu, Bensheng
    Wang, Xiaoxiao
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [4] Self-supervised pretraining for transferable quantitative phase image cell segmentation
    Vicar, Tomas
    Chmelik, Jiri
    Jakubicek, Roman
    Chmelikova, Larisa
    Gumulec, Jaromir
    Balvan, Jan
    Provaznik, Ivo
    Kolar, Radim
    BIOMEDICAL OPTICS EXPRESS, 2021, 12 (10) : 6514 - 6528
  • [5] Masked autoencoder: influence of self-supervised pretraining on object segmentation in industrial images
    Witte, Anja
    Lange, Sascha
    Lins, Christian
    INDUSTRIAL ARTIFICIAL INTELLIGENCE, 2 (1)
  • [6] INJECTING TEXT IN SELF-SUPERVISED SPEECH PRETRAINING
    Chen, Zhehuai
    Zhang, Yu
    Rosenberg, Andrew
    Ramabhadran, Bhuvana
    Wang, Gary
    Moreno, Pedro
    2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2021, : 251 - 258
  • [7] On Pretraining Data Diversity for Self-Supervised Learning
    Hammoud, Hasan Abed Al Kader
    Das, Tuhin
    Pizzati, Fabio
    Torr, Philip H. S.
    Bibi, Adel
    Ghanem, Bernard
    COMPUTER VISION - ECCV 2024, PT LVI, 2025, 15114 : 54 - 71
  • [8] SPeCiaL: Self-supervised Pretraining for Continual Learning
    Caccia, Lucas
    Pineau, Joelle
    CONTINUAL SEMI-SUPERVISED LEARNING, CSSL 2021, 2022, 13418 : 91 - 103
  • [9] Instance Localization for Self-supervised Detection Pretraining
    Yang, Ceyuan
    Wu, Zhirong
    Zhou, Bolei
    Lin, Stephen
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3986 - 3995
  • [10] SurgNet: Self-Supervised Pretraining With Semantic Consistency for Vessel and Instrument Segmentation in Surgical Images
    Chen, Jiachen
    Li, Mengyang
    Han, Hu
    Zhao, Zhiming
    Chen, Xilin
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (04) : 1513 - 1525