Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications

Cited: 0
Authors
Tariq, Amara [1 ]
Ramasamy, Gokul [1 ]
Patel, Bhavik [1 ,2 ,3 ]
Banerjee, Imon [1 ,2 ,3 ,4 ]
Affiliations
[1] Mayo Clin Arizona, Arizona Adv AI Hub, Phoenix, AZ 85054 USA
[2] Mayo Clin Arizona, Dept Radiol, Phoenix, AZ USA
[3] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ USA
[4] Mayo Clin, Dept Artificial Intelligence & Informat, Scottsdale, AZ USA
Keywords
biomedical imaging; computed tomography; image processing; self-supervised learning;
DOI
10.1117/1.JMI.11.6.064003
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Purpose: Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of medical imaging data. We investigate several self-supervised training strategies for chest computed tomography (CT) exams and their effects on downstream applications.
Approach: We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 M chest CT slices collected from four sites of the Mayo Clinic enterprise, United States. The resulting models were evaluated on two downstream tasks using public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in the models' understanding of chest CT exams.
Results: The use of pre-training weights, especially masked region prediction-based weights, improved performance and reduced the computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as ~380 K, with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.
Conclusion: We released the self-supervised models and weights under an open-source academic license. These models can be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.
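Note: The abstract names five pretext tasks but this record carries no implementation detail beyond the DOI above. Purely as an illustrative sketch, and not the authors' released code, the following PyTorch snippet shows one way input/target pairs for such pretext tasks could be generated from CT slices; the tensor shapes, patch size, noise level, and function names are assumptions introduced here for illustration.

import torch

def make_pretext_pair(ct_slice: torch.Tensor, task: str):
    """Build an (input, target) pair for one pretext task.

    ct_slice: a single CT slice as a (1, H, W) tensor with intensities
    scaled to roughly [0, 1]. Task names mirror the abstract; all sizes
    and noise levels are illustrative assumptions, not the paper's values.
    """
    if task == "mask":
        # Masked image region prediction: hide a random square patch;
        # the network is trained to reconstruct the original slice.
        _, h, w = ct_slice.shape
        ph, pw = h // 4, w // 4
        top = int(torch.randint(0, h - ph, (1,)))
        left = int(torch.randint(0, w - pw, (1,)))
        corrupted = ct_slice.clone()
        corrupted[:, top:top + ph, left:left + pw] = 0.0
        return corrupted, ct_slice
    if task == "rotate":
        # Rotation prediction: rotate by k * 90 degrees; target is the label k.
        k = int(torch.randint(0, 4, (1,)))
        return torch.rot90(ct_slice, k, dims=(1, 2)), torch.tensor(k)
    if task == "flip":
        # Flip prediction: randomly flip left-right; target is the flip flag.
        flipped = int(torch.randint(0, 2, (1,)))
        x = torch.flip(ct_slice, dims=(2,)) if flipped else ct_slice
        return x, torch.tensor(flipped)
    if task == "denoise":
        # Denoising: add Gaussian noise; the network reconstructs the clean slice.
        noisy = ct_slice + 0.1 * torch.randn_like(ct_slice)
        return noisy, ct_slice
    raise ValueError(f"unknown pretext task: {task}")

def make_next_slice_pair(volume: torch.Tensor, index: int):
    """Next slice prediction: given slice `index` of a (D, 1, H, W) volume,
    the target is the anatomically adjacent slice at `index + 1`."""
    return volume[index], volume[index + 1]

Example usage with a random stand-in slice: x, y = make_pretext_pair(torch.rand(1, 256, 256), "mask"). In practice each pretext task would drive a reconstruction or classification head on top of a shared encoder, whose weights are then fine-tuned for downstream tasks such as PE detection or nodule segmentation.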
Pages: 19