Self-Supervised Pretraining for Cardiovascular Magnetic Resonance Cine Segmentation

Cited by: 0
Authors
de Mooi, Rob A. J. [1 ]
Pluim, Iris O. W. [1 ]
Scannell, Cian M. [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Biomed Engn, Eindhoven, Netherlands
Keywords
Self-supervised learning; Image segmentation; Cardiovascular magnetic resonance; Deep learning
DOI
10.1007/978-3-031-73748-0_12
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised pretraining (SSP) has shown promising results in learning from large unlabeled datasets and, thus, could be useful for automated cardiovascular magnetic resonance (CMR) short-axis cine segmentation. However, inconsistent reports of the benefits of SSP for segmentation have made it difficult to apply SSP to CMR. Therefore, this study aimed to evaluate SSP methods for CMR cine segmentation. To this end, short-axis cine stacks of 296 subjects (90,618 2D slices) were used for unlabeled pretraining with four SSP methods: SimCLR, positional contrastive learning, DINO, and masked image modeling (MIM). Subsets of varying numbers of subjects were used for supervised fine-tuning of 2D models for each SSP method, as well as to train a 2D baseline model from scratch. The fine-tuned models were compared to the baseline using the 3D Dice similarity coefficient (DSC) in a test dataset of 140 subjects. The SSP methods showed no performance gains with the largest supervised fine-tuning subset compared to the baseline (DSC = 0.89). When only 10 subjects (231 2D slices) were available for supervised training, SSP using MIM (DSC = 0.86) improved over training from scratch (DSC = 0.82). This study found that SSP is valuable for CMR cine segmentation when labeled training data is scarce, but does not aid state-of-the-art deep learning methods when ample labeled data is available. Moreover, the choice of SSP method is important. The code is publicly available at: https://github.com/q-cardIA/ssp-cmr-cine-segmentation.
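The 3D Dice similarity coefficient reported in the abstract compares a predicted segmentation volume against the ground-truth mask. A minimal NumPy sketch is given below; the function name and the epsilon smoothing term are illustrative assumptions, not taken from the paper's released code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """3D Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|), smoothed by eps
    to avoid division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: two 4x4x4 masks of 32 voxels each, overlapping in 16 voxels
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
print(round(dice_coefficient(a, b), 2))  # → 0.5
```

A DSC of 1.0 means perfect overlap; the paper's reported values (e.g. 0.89 for the baseline) are averages of this metric over the 140-subject test set.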
Pages: 115-124
Number of pages: 10