Cross-modality transfer learning with knowledge infusion for diabetic retinopathy grading

Cited by: 0
Authors
Chen, Tao [1 ,2 ]
Bai, Yanmiao [2 ]
Mao, Haiting [1 ,2 ]
Liu, Shouyue [1 ,2 ]
Xu, Keyi [1 ,2 ]
Xiong, Zhouwei [1 ,2 ]
Ma, Shaodong [2 ]
Yang, Fang [1 ,2 ]
Zhao, Yitian [1 ,2 ]
Affiliations
[1] Wenzhou Med Univ, Cixi Biomed Res Inst, Ningbo, Peoples R China
[2] Chinese Acad Sci, Inst Biomed Engn, Ningbo Inst Mat Technol & Engn, Ningbo, Peoples R China
Keywords
ultra-wide-field image; domain adaptation; diabetic retinopathy; lesion segmentation; disease diagnosis; UNSUPERVISED DOMAIN ADAPTATION; NEURAL-NETWORK; IMAGES; SYSTEM; DEEP;
DOI
10.3389/fmed.2024.1400137
CLC number
R5 [Internal Medicine];
Discipline codes
1002; 100201
Abstract
Background: Ultra-wide-field (UWF) fundus photography is an emerging retinal imaging technique that offers a broader field of view, enhancing its utility in screening and diagnosing various eye diseases, notably diabetic retinopathy (DR). However, computer-aided diagnosis of DR from UWF images faces two major challenges. First, labeled UWF data are scarce, and the high cost of manually annotating medical images makes diagnostic models difficult to train. Second, existing models lack prior knowledge to guide the learning process, which limits their performance.
Purpose: By leveraging extensively annotated datasets in the field, namely large-scale, high-quality color fundus image datasets annotated at the image or pixel level, we aim to transfer knowledge from these datasets to our target domain through unsupervised domain adaptation.
Methods: Our approach presents a robust model for assessing DR severity in UWF images via unsupervised lesion-aware domain adaptation. To exploit the wealth of detailed annotations in publicly available color fundus image datasets, we further integrate an adversarial lesion map generator, which supplements the grading model with auxiliary lesion information, drawing inspiration from the clinical practice of evaluating DR severity by identifying and quantifying associated lesions.
Results: We conducted both quantitative and qualitative evaluations of the proposed method. In a comparison with six representative DR grading methods, our approach achieved an accuracy (ACC) of 68.18% and a precision (Pre) of 67.43%. Extensive ablation studies further validate the effectiveness of each component of the proposed method.
Conclusion: Our method not only improves the accuracy of DR grading but also enhances the interpretability of the results, providing clinicians with a reliable DR grading scheme.
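As a concrete illustration of the Methods paragraph, the sketch below shows the general pattern in PyTorch: a shared encoder, a lesion map generator whose pooled output supplements the grading head (the auxiliary lesion information), and a gradient-reversal domain discriminator for unsupervised adaptation between labeled color fundus images and unlabeled UWF images. Everything here (module names, layer sizes, and the use of DANN-style gradient reversal) is an illustrative assumption, not the paper's actual architecture.

    # Minimal PyTorch sketch of lesion-aware unsupervised domain adaptation
    # for DR grading. All modules, sizes, and names are illustrative
    # assumptions, not the authors' implementation.
    import torch
    import torch.nn as nn

    class GradientReversal(torch.autograd.Function):
        """Identity in the forward pass; negates gradients in the backward
        pass so the encoder learns domain-invariant features against the
        domain discriminator (DANN-style adaptation)."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    class LesionAwareGrader(nn.Module):
        def __init__(self, num_lesions=4, num_grades=5):
            super().__init__()
            # Shared encoder over color fundus / UWF images (toy stand-in).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Lesion map generator: per-pixel lesion probabilities, which
            # pixel-labeled color fundus data could supervise adversarially.
            self.lesion_head = nn.Conv2d(32, num_lesions, 1)
            # Grading head consumes image features plus pooled lesion
            # evidence, mirroring how clinicians grade DR from lesions.
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.grade_head = nn.Linear(32 + num_lesions, num_grades)
            # Domain discriminator: source (color fundus) vs. target (UWF).
            self.domain_head = nn.Linear(32, 2)

        def forward(self, x, lam=1.0):
            feat = self.encoder(x)
            lesion_map = torch.sigmoid(self.lesion_head(feat))
            img_vec = self.pool(feat).flatten(1)
            lesion_vec = self.pool(lesion_map).flatten(1)
            grade_logits = self.grade_head(
                torch.cat([img_vec, lesion_vec], dim=1))
            domain_logits = self.domain_head(
                GradientReversal.apply(img_vec, lam))
            return grade_logits, lesion_map, domain_logits

    # Toy forward pass: a batch of two 3x128x128 images.
    model = LesionAwareGrader()
    grades, lesions, domains = model(torch.randn(2, 3, 128, 128))
    print(grades.shape, lesions.shape, domains.shape)

In such a scheme, training would alternate batches from both domains, applying the grading and lesion-map losses only where the corresponding labels exist, while the domain loss uses all images; this is one plausible reading of the abstract, not a description of the paper's training procedure.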
Pages: 17