Masked Deformation Modeling for Volumetric Brain MRI Self-Supervised Pre-Training

Times Cited: 0
Authors
Lyu, Junyan [1 ,2 ]
Bartlett, Perry F. [2 ]
Nasrallah, Fatima A. [2 ]
Tang, Xiaoying [1 ,3 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Peoples R China
[2] Univ Queensland, Queensland Brain Inst, St Lucia, Qld 4072, Australia
[3] Southern Univ Sci & Technol, Jiaxing Res Inst, Jiaxing 314031, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain; Magnetic resonance imaging; Deformation; Brain modeling; Image segmentation; Image restoration; Biomedical imaging; Annotations; Feature extraction; Lesions; Self-supervised learning; masked deformation modeling; brain segmentation; DIFFEOMORPHIC IMAGE REGISTRATION; SEGMENTATION; HIPPOCAMPUS; MORPHOMETRY; PATTERNS; RESOURCE; ATLAS;
DOI
10.1109/TMI.2024.3510922
CLC Number
TP39 [Computer Applications];
Discipline Code
081203; 0835;
Abstract
Self-supervised learning (SSL) has been proposed to alleviate neural networks' reliance on annotated data and to improve downstream task performance, and it has achieved substantial success in several volumetric medical image segmentation tasks. However, most existing approaches are designed and pre-trained on CT or MRI datasets of non-brain organs. This lack of brain-specific priors limits their performance on brain segmentation, especially on fine-grained brain parcellation. To overcome this limitation, we propose a novel SSL strategy for human brain MRI, named Masked Deformation Modeling (MDM). MDM first conducts atlas-guided patch sampling on individual brain MRI scans (moving volumes) and the MNI152 template (the fixed volume). The sampled moving volumes are randomly masked in a feature-aligned manner and then fed into a U-Net-based network to extract latent features. An intensity head and a deformation field head decode the latent features, respectively restoring the masked volume and predicting the deformation field from the moving volume to the fixed volume. The proposed MDM is fine-tuned and evaluated on three brain parcellation datasets of different granularities (JHU, Mindboggle-101, CANDI), a brain lesion segmentation dataset (ATLAS2), and a brain tumor segmentation dataset (BraTS21). Results demonstrate that MDM outperforms various state-of-the-art medical SSL methods by considerable margins and can reduce annotation effort by at least 40%. Code and pre-trained weights will be released at https://github.com/CRazorback/MDM.
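The dual-head design described in the abstract (masked moving patch, shared U-Net features, then an intensity restoration head plus a deformation field head) can be illustrated with a minimal PyTorch sketch. Everything below (module names, the toy two-layer stand-in backbone, channel counts) is an assumption for illustration only, not the authors' released implementation; see the GitHub link above for the official code.

import torch
import torch.nn as nn

class MDMSketch(nn.Module):
    # Masked moving patch in -> (restored intensities, dense deformation field).
    def __init__(self, in_ch: int = 1, feat_ch: int = 16):
        super().__init__()
        # Stand-in for the U-Net backbone; the real network would use
        # multiple resolution levels with skip connections.
        self.backbone = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Intensity head: restores the masked moving volume (1 channel).
        self.intensity_head = nn.Conv3d(feat_ch, in_ch, 1)
        # Deformation head: predicts a 3-channel displacement field mapping
        # the moving volume toward the fixed (MNI152) template.
        self.deformation_head = nn.Conv3d(feat_ch, 3, 1)

    def forward(self, masked_moving: torch.Tensor):
        z = self.backbone(masked_moving)
        return self.intensity_head(z), self.deformation_head(z)

# Toy usage on a random masked patch of shape (batch, channel, D, H, W).
model = MDMSketch()
restored, flow = model(torch.randn(1, 1, 32, 32, 32))
print(restored.shape, flow.shape)  # (1, 1, 32, 32, 32) and (1, 3, 32, 32, 32)

During pre-training, one would presumably pair the restoration output with a reconstruction loss on the masked voxels and the flow output with a registration loss against the fixed template; those losses are not part of this sketch.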
Pages: 1596 - 1607
Page count: 12