Evaluating Task-Specific Augmentations in Self-Supervised Pre-Training for 3D Medical Image Analysis

Cited by: 1
Authors
Claessens, C. H. B. [1 ]
Hamm, J. J. M. [2 ]
Viviers, C. G. A. [1 ]
Nederend, J. [3 ]
Grunhagen, D. J. [2 ]
Tanis, P. J. [2 ]
de With, P. H. N. [1 ]
van der Sommen, F. [1 ]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
[2] Erasmus MC, Rotterdam, Netherlands
[3] Catharina Hosp, Eindhoven, Netherlands
Keywords
self-supervised learning; pre-training; medical imaging; three-dimensional; augmentations; self-distillation
DOI
10.1117/12.3000850
CLC Number
R5 [Internal Medicine]
Discipline Codes
1002; 100201
Abstract
Self-supervised learning (SSL) has become a crucial approach for pre-training deep learning models in natural and medical image analysis. However, applying transformations designed for natural images to three-dimensional (3D) medical data poses challenges. This study explores the efficacy of specific augmentations in the context of self-supervised pre-training for volumetric medical images. A 3D non-contrastive framework is proposed for in-domain self-supervised pre-training on 3D grayscale thorax CT data, incorporating four spatial and two intensity augmentations commonly used in 3D medical image analysis. The pre-trained models, adapted versions of ResNet-50 and Vision Transformer (ViT)-S, are evaluated on lung nodule classification and lung tumor segmentation tasks. The results indicate a significant impact of SSL, with a marked increase in area under the ROC curve (AUC) and Dice similarity coefficient (DSC) compared to training from scratch. For classification, random scaling and random rotation play a fundamental role in achieving higher downstream performance, while intensity augmentations contribute little and may even degrade performance. For segmentation, random intensity histogram shifting enhances robustness, while other augmentations have marginal or negative impacts. These findings underscore the necessity of tailored data augmentations within SSL for medical imaging, emphasizing the importance of task-specific transformations for optimal model performance in complex 3D medical datasets.
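The augmentation families named in the abstract map onto standard 3D medical-imaging transforms. Below is a minimal sketch (not the authors' code) of a two-view augmentation pipeline of the kind a non-contrastive, self-distillation setup would consume, written with MONAI; the transform choices and every parameter range are illustrative assumptions rather than the paper's reported configuration.

```python
# Sketch of a two-view 3D augmentation pipeline combining the transform
# families named in the abstract: random scaling, random rotation, random
# intensity shifting, and random intensity histogram shifting.
# All parameter ranges below are illustrative assumptions.
import torch
from monai.transforms import (
    Compose,
    RandZoom,
    RandRotate,
    RandShiftIntensity,
    RandHistogramShift,
)

augment = Compose([
    RandZoom(min_zoom=0.8, max_zoom=1.2, prob=0.5),        # random scaling
    RandRotate(range_x=0.26, range_y=0.26, range_z=0.26,   # up to ~15 deg per axis
               prob=0.5, keep_size=True),                  # random rotation
    RandShiftIntensity(offsets=0.1, prob=0.5),             # random intensity shift
    RandHistogramShift(num_control_points=10, prob=0.5),   # histogram shifting
])

# A non-contrastive (self-distillation) objective compares two independently
# augmented views of the same volume; channel-first shape (C, D, H, W).
ct_patch = torch.rand(1, 96, 96, 96)  # placeholder grayscale CT patch
view_a = augment(ct_patch)
view_b = augment(ct_patch)
```

In a BYOL/DINO-style setup, one view would feed the student branch and the other the teacher branch. Following the abstract's findings, one might emphasize the spatial transforms when pre-training for classification and the histogram shift when pre-training for segmentation.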
Pages: 8
Related Papers
50 records in total
  • [21] Self-Supervised Pre-Training for 3-D Roof Reconstruction on LiDAR Data
    Yang, Hongxin
    Huang, Shangfeng
    Wang, Ruisheng
    Wang, Xin
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21: 1-5
  • [22] Self-Supervised Pre-training for Time Series Classification
    Shi, Pengxiang
    Ye, Wenwen
    Qin, Zheng
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [23] Self-supervised pre-training improves fundus image classification for diabetic retinopathy
    Lee, Joohyung
    Lee, Eung-Joo
    REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2022, 2022, 12102
  • [24] Object Adaptive Self-Supervised Dense Visual Pre-Training
    Zhang, Yu
    Zhang, Tao
    Zhu, Hongyuan
    Chen, Zihan
    Mi, Siya
    Peng, Xi
    Geng, Xin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34: 2228-2240
  • [25] UniVIP: A Unified Framework for Self-Supervised Visual Pre-training
    Li, Zhaowen
    Zhu, Yousong
    Yang, Fan
    Li, Wei
    Zhao, Chaoyang
    Chen, Yingying
    Chen, Zhiyang
    Xie, Jiahao
    Wu, Liwei
    Zhao, Rui
    Tang, Ming
    Wang, Jinqiao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 14607-14616
  • [26] Reducing Domain mismatch in Self-supervised speech pre-training
    Baskar, Murali Karthick
    Rosenberg, Andrew
    Ramabhadran, Bhuvana
    Zhang, Yu
    INTERSPEECH 2022, 2022: 3028-3032
  • [27] Dense Contrastive Learning for Self-Supervised Visual Pre-Training
    Wang, Xinlong
    Zhang, Rufeng
    Shen, Chunhua
    Kong, Tao
    Li, Lei
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 3023-3032
  • [28] A Self-Supervised Pre-Training Method for Chinese Spelling Correction
    Su J.
    Yu S.
    Hong X.
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2023, 51(09): 90-98
  • [29] Self-supervised VICReg pre-training for Brugada ECG detection
    Ronan, Robert
    Tarabanis, Constantine
    Chinitz, Larry
    Jankelson, Lior
    SCIENTIFIC REPORTS, 2025, 15(01)
  • [30] Self-supervised Pre-training for Semantic Segmentation in an Indoor Scene
    Shrestha, Sulabh
    Li, Yimeng
    Kosecka, Jana
    2024 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS, WACVW 2024, 2024: 625-635