Self-Supervised Pretraining for Cardiovascular Magnetic Resonance Cine Segmentation

Times Cited: 0
Authors
de Mooij, Rob A. J. [1]
Pluim, Josien P. W. [1]
Scannell, Cian M. [1 ]
Affiliations
[1] Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
Keywords
Self-supervised learning; Image segmentation; Cardiovascular magnetic resonance; Deep learning
DOI
10.1007/978-3-031-73748-0_12
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised pretraining (SSP) has shown promising results in learning from large unlabeled datasets and could therefore be useful for automated cardiovascular magnetic resonance (CMR) short-axis cine segmentation. However, inconsistent reports of the benefits of SSP for segmentation have made it difficult to apply SSP to CMR. This study therefore aimed to evaluate SSP methods for CMR cine segmentation. To this end, short-axis cine stacks of 296 subjects (90,618 2D slices) were used for unlabeled pretraining with four SSP methods: SimCLR, positional contrastive learning, DINO, and masked image modeling (MIM). Subsets of varying numbers of subjects were used for supervised fine-tuning of 2D models for each SSP method, as well as to train a 2D baseline model from scratch. The fine-tuned models were compared to the baseline using the 3D Dice similarity coefficient (DSC) on a test dataset of 140 subjects. With the largest supervised fine-tuning subset, the SSP methods showed no performance gain over the baseline (DSC = 0.89). When only 10 subjects (231 2D slices) were available for supervised training, SSP using MIM (DSC = 0.86) improved over training from scratch (DSC = 0.82). This study found that SSP is valuable for CMR cine segmentation when labeled training data is scarce, but it does not aid state-of-the-art deep learning methods when ample labeled data is available. Moreover, the choice of SSP method matters. The code is publicly available at: https://github.com/q-cardIA/ssp-cmr-cine-segmentation.
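The two technical quantities at the core of the abstract, the 3D Dice similarity coefficient (DSC) used for evaluation and the masked image modeling (MIM) objective that performed best in the low-data regime, can be illustrated with short sketches. First, a minimal NumPy computation of the DSC between two binary 3D masks; this is the standard formulation, not the authors' evaluation code from the linked repository.

    import numpy as np

    def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
        """3D Dice similarity coefficient between two binary segmentation masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        denominator = pred.sum() + target.sum()
        if denominator == 0:
            return 1.0  # both masks empty: conventionally treated as a perfect match
        return 2.0 * intersection / denominator

MIM pretraining hides random image patches and trains a network to reconstruct them from the visible context. The PyTorch fragment below sketches only the masking step for a batch of 2D cine slices; the patch size, mask ratio, and function names are illustrative assumptions, not the paper's configuration.

    import torch

    def random_patch_mask(images: torch.Tensor, patch: int = 16, mask_ratio: float = 0.6):
        """Zero out a random subset of square patches in a batch of 2D slices.

        images: (B, 1, H, W), with H and W divisible by `patch` (illustrative choice).
        Returns the masked images and the boolean keep-mask.
        """
        B, _, H, W = images.shape
        keep = torch.rand(B, H // patch, W // patch, device=images.device) > mask_ratio
        mask = keep.repeat_interleave(patch, 1).repeat_interleave(patch, 2).unsqueeze(1)
        return images * mask, mask

    # Pretraining step (sketch): an encoder-decoder reconstructs the hidden patches,
    # e.g. loss = ((model(masked) - images) ** 2 * (~mask)).mean()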
Pages: 115-124 (10 pages)