Fine-tuning pre-trained neural networks for medical image classification in small clinical datasets

Cited by: 10
Authors
Spolaor, Newton [1 ]
Lee, Huei Diana [1 ]
Mendes, Ana Isabel [2 ]
Nogueira, Conceicao Veloso [2 ,3 ]
Sabino Parmezan, Antonio Rafael [1 ,4 ]
Resende Takaki, Weber Shoity [1 ]
Rodrigues Coy, Claudio Saddy [5 ]
Wu, Feng Chung [1 ,5 ]
Fonseca-Pinto, Rui [2 ,6 ,7 ]
Affiliations
[1] Western Parana State Univ UNIOESTE, Lab Bioinformat, Presidente Tancredo Neves Ave 6731, BR-85867900 Foz Do Iguacu, Parana, Brazil
[2] Polytech Inst Leiria, Gen Norton Matos St 4133, P-2411901 Leiria, Portugal
[3] Univ Minho, Ctr Math, Braga, Portugal
[4] Univ Sao Paulo, Inst Math & Comp Sci, Lab Computat Intelligence, Sao Carlos, SP, Brazil
[5] Univ Estadual Campinas, Fac Med Sci, Serv Coloproctol, Campinas, SP, Brazil
[6] Polytech Inst Leiria, CiTechCare Ctr Innovat Care & Hlth Technol, Leiria, Portugal
[7] IT Inst Telecomunicacoes Leiria, Leiria, Portugal
Keywords
Feature learning; Few-shot learning; RMSprop; Shallow learning; Statistical test; VGG; MELANOMA; THICKNESS; FEATURES; TEXTURE;
DOI
10.1007/s11042-023-16529-w
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Convolutional neural networks have been effective in several applications, emerging as a promising supporting tool for a relevant Dermatology problem: skin cancer diagnosis. However, generalizing well can be difficult when little training data is available. The fine-tuning transfer learning strategy has been employed to properly differentiate malignant from non-malignant lesions in dermoscopic images. Fine-tuning a pre-trained network allows one to classify data in the target domain, occasionally with few images, using knowledge acquired in another domain. This work proposes eight fine-tuning settings based on convolutional networks previously trained on ImageNet that can be employed mainly on limited data samples to reduce the risk of overfitting. They differ in the architecture, the learning rate, and the number of unfrozen layer blocks. We evaluated the settings on two public datasets with 104 and 200 dermoscopic images. By finding competitive configurations in small datasets, this paper illustrates that deep learning can be effective if one has only a few dozen malignant and non-malignant lesion images to study and differentiate in Dermatology. The proposal is also flexible and potentially useful for other domains. In fact, it performed satisfactorily in an assessment conducted on a larger dataset with 746 computed tomography images associated with the coronavirus disease.
Pages: 27305-27329
Number of pages: 25
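
Below is a minimal, illustrative sketch of the kind of fine-tuning setting the abstract describes: an ImageNet-pre-trained convolutional network (VGG16 here, matching the VGG keyword) with a configurable number of unfrozen layer blocks and an RMSprop optimizer, adapted to binary malignant vs. non-malignant lesion classification. It assumes a TensorFlow/Keras workflow; the specific learning rate, unfrozen-block count, and classification head shown are assumptions for illustration, not the paper's exact eight configurations.

# Illustrative sketch (not the authors' code): fine-tuning an ImageNet-pre-trained
# VGG16 for binary lesion classification. The architecture, learning rate, and
# number of unfrozen layer blocks are the knobs varied across the settings; the
# concrete values below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

def build_fine_tuned_vgg(unfrozen_blocks=2, learning_rate=1e-5,
                         input_shape=(224, 224, 3)):
    # Convolutional base pre-trained on ImageNet, without the dense head.
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)

    # Freeze every layer, then unfreeze only the last `unfrozen_blocks` blocks
    # (VGG16 layer names start with "block1_" ... "block5_").
    unfrozen = {f"block{5 - i}" for i in range(unfrozen_blocks)}
    for layer in base.layers:
        layer.trainable = any(layer.name.startswith(b) for b in unfrozen)

    # Small classification head for malignant vs. non-malignant lesions.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])

    # RMSprop with a small learning rate, as suggested by the keywords; the
    # per-setting rate is an assumption here.
    model.compile(optimizer=optimizers.RMSprop(learning_rate=learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_fine_tuned_vgg(unfrozen_blocks=2, learning_rate=1e-5)
    model.summary()
    # With only ~100-200 dermoscopic images, freezing early blocks, a small
    # learning rate, and few epochs are the main levers against overfitting:
    # model.fit(train_ds, validation_data=val_ds, epochs=20)

In this sketch, varying `unfrozen_blocks` and `learning_rate` (and swapping VGG16 for another pre-trained backbone) reproduces the three axes along which the abstract says the eight settings differ.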