Masked Deformation Modeling for Volumetric Brain MRI Self-Supervised Pre-Training

Cited: 0
Authors
Lyu, Junyan [1 ,2 ]
Bartlett, Perry F. [2 ]
Nasrallah, Fatima A. [2 ]
Tang, Xiaoying [1 ,3 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Peoples R China
[2] Univ Queensland, Queensland Brain Inst, St Lucia, Qld 4072, Australia
[3] Southern Univ Sci & Technol, Jiaxing Res Inst, Jiaxing 314031, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain; Magnetic resonance imaging; Deformation; Brain modeling; Image segmentation; Image restoration; Biomedical imaging; Annotations; Feature extraction; Lesions; Self-supervised learning; masked deformation modeling; brain segmentation; DIFFEOMORPHIC IMAGE REGISTRATION; SEGMENTATION; HIPPOCAMPUS; MORPHOMETRY; PATTERNS; RESOURCE; ATLAS;
DOI
10.1109/TMI.2024.3510922
CLC Classification Number
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Self-supervised learning (SSL) has been proposed to alleviate neural networks' reliance on annotated data and to improve downstream task performance, and it has achieved substantial success in several volumetric medical image segmentation tasks. However, most existing approaches are designed and pre-trained on CT or MRI datasets of non-brain organs. The lack of brain priors limits those methods' performance on brain segmentation, especially on fine-grained brain parcellation. To overcome this limitation, we propose a novel SSL strategy for MRI of the human brain, named Masked Deformation Modeling (MDM). MDM first conducts atlas-guided patch sampling on individual brain MRI scans (moving volumes) and an MNI152 template (the fixed volume). The sampled moving volumes are randomly masked in a feature-aligned manner and then fed into a U-Net-based network to extract latent features. An intensity head and a deformation field head decode the latent features, respectively restoring the masked volume and predicting the deformation field from the moving volume to the fixed volume. The proposed MDM is fine-tuned and evaluated on three brain parcellation datasets with different granularities (JHU, Mindboggle-101, CANDI), a brain lesion segmentation dataset (ATLAS2), and a brain tumor segmentation dataset (BraTS21). Results demonstrate that MDM outperforms various state-of-the-art medical SSL methods by considerable margins and can effectively reduce the annotation effort by at least 40%. Code and pre-trained weights will be released at https://github.com/CRazorback/MDM.
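As a rough illustration of the dual-objective pre-training described in the abstract, the minimal PyTorch sketch below pairs a masked intensity-restoration loss with a deformation-similarity loss between a warped moving volume and the fixed template. Everything in the sketch is an assumption made for illustration only: the toy encoder, the L1/MSE loss choices, the single-scale displacement warp, the voxel-wise masking, and the loss weighting all simplify the paper's actual U-Net backbone, feature-aligned masking, and training objective; the official implementation is at https://github.com/CRazorback/MDM.

# Minimal sketch of an MDM-style dual objective (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUNet3D(nn.Module):
    """Stand-in 3D encoder with two decoding heads; the paper uses a U-Net-based backbone."""
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.intensity_head = nn.Conv3d(feat, 1, 1)  # restores masked intensities
        self.deform_head = nn.Conv3d(feat, 3, 1)     # 3-channel displacement field (voxels)

    def forward(self, moving_masked):
        latent = self.enc(moving_masked)
        return self.intensity_head(latent), self.deform_head(latent)

def warp(moving, flow):
    """Warp `moving` with a dense displacement field via grid_sample (assumed spatial transformer)."""
    B, _, D, H, W = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    theta = torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1)
    grid = F.affine_grid(theta, size=list(moving.shape), align_corners=True)
    # Convert voxel displacements (assumed channel order dx, dy, dz) to normalized units.
    norm = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)])
    flow_norm = flow.permute(0, 2, 3, 4, 1) * norm
    return F.grid_sample(moving, grid + flow_norm, align_corners=True)

def mdm_loss(model, moving, fixed, mask, lam=1.0):
    """Restore masked intensities and align the warped moving volume to the fixed template."""
    restored, flow = model(moving * mask)                          # mask==1 keeps, mask==0 hides voxels
    recon = F.l1_loss(restored * (1 - mask), moving * (1 - mask))  # restoration on masked regions only
    sim = F.mse_loss(warp(moving, flow), fixed)                    # similarity to the MNI152 template patch
    return recon + lam * sim

if __name__ == "__main__":
    model = ToyUNet3D()
    moving = torch.rand(1, 1, 32, 32, 32)                # sampled brain patch (moving volume)
    fixed = torch.rand(1, 1, 32, 32, 32)                 # corresponding template patch (fixed volume)
    mask = (torch.rand(1, 1, 32, 32, 32) > 0.4).float()  # random patch-style mask
    print(mdm_loss(model, moving, fixed, mask).item())

In practice the deformation term would usually also carry a smoothness regularizer on the predicted field, as is standard in learning-based registration, but that detail is not specified in the abstract and is omitted from the sketch.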
Pages: 1596 - 1607
Number of Pages: 12
Related Papers
50 records in total
  • [41] Joint Encoder-Decoder Self-Supervised Pre-training for ASR
    Arunkumar, A.
    Umesh, S.
    INTERSPEECH 2022, 2022, : 3418 - 3422
  • [42] ENHANCING THE DOMAIN ROBUSTNESS OF SELF-SUPERVISED PRE-TRAINING WITH SYNTHETIC IMAGES
    Hassan, Mohamad N. C.
    Bhattacharya, Avigyan
    da Costa, Victor G. Turrisi
    Banerjee, Biplab
    Ricci, Elisa
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 5470 - 5474
  • [43] Individualized Stress Mobile Sensing Using Self-Supervised Pre-Training
    Islam, Tanvir
    Washington, Peter
    APPLIED SCIENCES-BASEL, 2023, 13 (21):
  • [44] Stabilizing Label Assignment for Speech Separation by Self-supervised Pre-training
    Huang, Sung-Feng
    Chuang, Shun-Po
    Liu, Da-Rong
    Chen, Yi-Chen
    Yang, Gene-Ping
    Lee, Hung-yi
    INTERSPEECH 2021, 2021, : 3056 - 3060
  • [45] Self-Supervised Pre-training for Protein Embeddings Using Tertiary Structures
    Guo, Yuzhi
    Wu, Jiaxiang
    Ma, Hehuan
    Huang, Junzhou
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 6801 - 6809
  • [46] Progressive self-supervised learning: A pre-training method for crowd counting
    Gu, Yao
    Zheng, Zhe
    Wu, Yingna
    Xie, Guangping
    Ni, Na
    PATTERN RECOGNITION LETTERS, 2025, 188 : 148 - 154
  • [47] DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder
    Zhang, Zhenyu
    Guo, Tao
    Chen, Meng
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3647 - 3651
  • [48] SslTransT: Self-supervised pre-training visual object tracking with Transformers
    Cai, Yannan
    Tan, Ke
    Wei, Zhenzhong
    OPTICS COMMUNICATIONS, 2024, 557
  • [49] Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering
    Yang, Yaming
    Guan, Ziyu
    Wang, Zhe
    Zhao, Wei
    Xu, Cai
    Lu, Weigang
    Huang, Jianbin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [50] GUIDED CONTRASTIVE SELF-SUPERVISED PRE-TRAINING FOR AUTOMATIC SPEECH RECOGNITION
    Khare, Aparna
    Wu, Minhua
    Bhati, Saurabhchand
    Droppo, Jasha
    Maas, Roland
    2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 174 - 181