Masked Deformation Modeling for Volumetric Brain MRI Self-Supervised Pre-Training

Cited by: 0
Authors
Lyu, Junyan [1 ,2 ]
Bartlett, Perry F. [2 ]
Nasrallah, Fatima A. [2 ]
Tang, Xiaoying [1 ,3 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Peoples R China
[2] Univ Queensland, Queensland Brain Inst, St Lucia, Qld 4072, Australia
[3] Southern Univ Sci & Technol, Jiaxing Res Inst, Jiaxing 314031, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain; Magnetic resonance imaging; Deformation; Brain modeling; Image segmentation; Image restoration; Biomedical imaging; Annotations; Feature extraction; Lesions; Self-supervised learning; masked deformation modeling; brain segmentation; DIFFEOMORPHIC IMAGE REGISTRATION; SEGMENTATION; HIPPOCAMPUS; MORPHOMETRY; PATTERNS; RESOURCE; ATLAS;
DOI
10.1109/TMI.2024.3510922
CLC number
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
Self-supervised learning (SSL) has been proposed to reduce neural networks' reliance on annotated data and to improve downstream task performance, achieving substantial success in several volumetric medical image segmentation tasks. However, most existing approaches are designed and pre-trained on CT or MRI datasets of non-brain organs. The lack of brain-specific priors limits those methods' performance on brain segmentation, especially on fine-grained brain parcellation. To overcome this limitation, we propose a novel SSL strategy for human brain MRI, named Masked Deformation Modeling (MDM). MDM first conducts atlas-guided patch sampling on individual brain MRI scans (moving volumes) and the MNI152 template (the fixed volume). The sampled moving volumes are randomly masked in a feature-aligned manner and then fed into a U-Net-based network to extract latent features. An intensity head and a deformation field head decode the latent features, respectively restoring the masked volume and predicting the deformation field from the moving volume to the fixed volume. The proposed MDM is fine-tuned and evaluated on three brain parcellation datasets of different granularities (JHU, Mindboggle-101, CANDI), a brain lesion segmentation dataset (ATLAS2), and a brain tumor segmentation dataset (BraTS21). Results demonstrate that MDM outperforms various state-of-the-art medical SSL methods by considerable margins and can reduce the annotation effort by at least 40%. Code and pre-trained weights will be released at https://github.com/CRazorback/MDM.
Pages: 1596-1607
Page count: 12
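
The abstract describes a two-headed pre-training objective: masked patches from a moving volume are encoded by a shared U-Net, an intensity head restores the masked voxels, and a deformation field head predicts a displacement field that registers the moving patch to the corresponding fixed MNI152 template patch. Below is a minimal PyTorch sketch of one such pre-training step, based only on the abstract; the network size, the plain random block mask, the warp implementation, and the loss weighting are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
# Illustrative MDM-style pre-training step. Shapes, masking, and loss weights
# are assumptions made for this sketch, not the paper's actual settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.LeakyReLU(0.1),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.LeakyReLU(0.1),
    )

class TinyUNet3D(nn.Module):
    """Shared encoder-decoder; two heads decode the same latent features."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = conv_block(1, ch)
        self.enc2 = conv_block(ch, ch * 2)
        self.dec1 = conv_block(ch * 2 + ch, ch)
        self.intensity_head = nn.Conv3d(ch, 1, 1)  # restores masked intensities
        self.flow_head = nn.Conv3d(ch, 3, 1)       # voxel displacement field (x, y, z)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(F.max_pool3d(f1, 2))
        up = F.interpolate(f2, scale_factor=2, mode="trilinear", align_corners=False)
        d1 = self.dec1(torch.cat([up, f1], dim=1))
        return self.intensity_head(d1), self.flow_head(d1)

def random_block_mask(shape, block=8, ratio=0.6, device="cpu"):
    """Mask whole blocks of voxels; 1 = kept, 0 = masked."""
    B, _, D, H, W = shape
    coarse = (torch.rand(B, 1, D // block, H // block, W // block, device=device) > ratio).float()
    return F.interpolate(coarse, size=(D, H, W), mode="nearest")

def warp(moving, flow):
    """Warp `moving` by a dense displacement field (in voxels) via grid_sample."""
    B, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xx, yy, zz)).float().to(moving.device)[None] + flow  # (B, 3, D, H, W)
    norm = torch.tensor([W - 1, H - 1, D - 1], device=moving.device).view(1, 3, 1, 1, 1)
    grid = 2 * grid / norm - 1                                               # normalize to [-1, 1]
    return F.grid_sample(moving, grid.permute(0, 2, 3, 4, 1), align_corners=True)

# One pre-training step on a sampled (moving, fixed) patch pair.
model = TinyUNet3D()
moving = torch.randn(2, 1, 32, 32, 32)  # stands in for patches sampled from individual scans
fixed = torch.randn(2, 1, 32, 32, 32)   # stands in for the matching MNI152 template patches
mask = random_block_mask(moving.shape, device=moving.device)

restored, flow = model(moving * mask)
loss_restore = F.mse_loss(restored * (1 - mask), moving * (1 - mask))  # masked voxels only
loss_deform = F.mse_loss(warp(moving, flow), fixed)                    # registration similarity
loss = loss_restore + 1.0 * loss_deform                                # weight is arbitrary here
loss.backward()
```

In the paper the masking is feature-aligned and the patches are sampled under atlas guidance; here a plain random block mask and random tensors stand in for both, and the registration term uses a simple MSE similarity rather than whatever similarity and smoothness losses the authors use.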