Cross-modality transfer learning with knowledge infusion for diabetic retinopathy grading

Cited: 0
Authors
Chen, Tao [1 ,2 ]
Bai, Yanmiao [2 ]
Mao, Haiting [1 ,2 ]
Liu, Shouyue [1 ,2 ]
Xu, Keyi [1 ,2 ]
Xiong, Zhouwei [1 ,2 ]
Ma, Shaodong [2 ]
Yang, Fang [1 ,2 ]
Zhao, Yitian [1 ,2 ]
Affiliations
[1] Wenzhou Med Univ, Cixi Biomed Res Inst, Ningbo, Peoples R China
[2] Chinese Acad Sci, Inst Biomed Engn, Ningbo Inst Mat Technol & Engn, Ningbo, Peoples R China
Keywords
ultra-wide-field image; domain adaptation; diabetic retinopathy; lesion segmentation; disease diagnosis; UNSUPERVISED DOMAIN ADAPTATION; NEURAL-NETWORK; IMAGES; SYSTEM; DEEP;
DOI
10.3389/fmed.2024.1400137
Chinese Library Classification (CLC)
R5 [Internal Medicine]
Subject classification codes
1002 ; 100201 ;
Abstract
Background: Ultra-wide-field (UWF) fundus photography is an emerging retinal imaging technique that offers a broader field of view, enhancing its utility in screening and diagnosing various eye diseases, notably diabetic retinopathy (DR). However, computer-aided diagnosis of DR from UWF images faces two major challenges. First, labeled UWF data are scarce, and the high cost of manually annotating medical images makes it difficult to train diagnostic models. Second, the performance of existing models suffers from the absence of prior knowledge to guide the learning process.
Purpose: By leveraging the field's extensively annotated resources, namely large-scale, high-quality color fundus image datasets annotated at the image or pixel level, we aim to transfer knowledge from these datasets to our target domain through unsupervised domain adaptation.
Methods: We present a robust model for assessing DR severity that applies unsupervised lesion-aware domain adaptation to UWF images. To harness the wealth of detailed annotations in publicly available color fundus image datasets, we further integrate an adversarial lesion map generator. This generator supplements the grading model with auxiliary lesion information, inspired by the clinical practice of evaluating DR severity by identifying and quantifying the associated lesions.
Results: We evaluated the proposed method both quantitatively and qualitatively. Compared with six representative DR grading methods, our approach achieved an accuracy (ACC) of 68.18% and a precision (Pre) of 67.43%. Extensive ablation studies further validate the effectiveness of each component of the proposed method.
Conclusion: Our method not only improves the accuracy of DR grading but also enhances the interpretability of the results, providing clinicians with a reliable DR grading scheme.
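The Methods section describes a grading model supplemented by an adversarial lesion map generator, i.e., a composite objective combining a DR grading loss, an auxiliary lesion segmentation loss, and an adversarial term that aligns UWF lesion maps with the annotated color-fundus source domain. The sketch below illustrates only how such a composite loss could be assembled; the loss weights, network outputs, and variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a lesion-aware composite objective (all values and
# weights are illustrative stand-ins, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

def bce(p, y):
    """Binary cross-entropy, averaged over all elements."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def cross_entropy(logits, label):
    """Softmax cross-entropy for one sample (5 DR severity grades, 0-4)."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

# Toy stand-ins for network outputs on one UWF image.
grade_logits = rng.normal(size=5)           # grading head over 5 DR grades
lesion_pred  = rng.uniform(size=(8, 8))     # predicted lesion probability map
lesion_gt    = (rng.uniform(size=(8, 8)) > 0.9).astype(float)  # transferred labels
disc_on_uwf  = rng.uniform(size=1)          # discriminator output on the UWF map

# Composite objective: grading loss + auxiliary segmentation loss
# + adversarial term pushing UWF lesion maps to look "source-like".
lam_seg, lam_adv = 1.0, 0.1                 # assumed weights
l_grade = cross_entropy(grade_logits, label=2)
l_seg   = bce(lesion_pred, lesion_gt)
l_adv   = bce(disc_on_uwf, np.ones(1))      # generator tries to fool the discriminator
l_total = l_grade + lam_seg * l_seg + lam_adv * l_adv
```

In such a setup the discriminator is trained with the opposite labels, so the generator's lesion maps on UWF images gradually become indistinguishable from those produced on the annotated color fundus domain.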
Pages: 17
Related Papers
50 records
  • [1] IMPLICIT LEARNING - WITHIN-MODALITY AND CROSS-MODALITY TRANSFER OF TACIT KNOWLEDGE
    MANZA, L
    REBER, AS
    BULLETIN OF THE PSYCHONOMIC SOCIETY, 1991, 29 (06) : 499 - 499
  • [2] Dynamic Knowledge Distillation with Cross-Modality Knowledge Transfer
    Wang, Guangzhi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2974 - 2978
  • [3] Addressing imaging accessibility by cross-modality transfer learning
    Zheng, Zhiyang
    Su, Yi
    Chen, Kewei
    Weidman, David A.
    Wu, Teresa
    Lo, Ben
    Lure, Fleming
    Li, Jing
    MEDICAL IMAGING 2022: COMPUTER-AIDED DIAGNOSIS, 2022, 12033
  • [4] Coral Classification Using DenseNet and Cross-modality Transfer Learning
    Xu, Lian
    Bennamoun, Mohammed
    Boussaid, Farid
    An, Senjian
    Sohel, Ferdous
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [5] Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans
    Liu, Yucheng
    Khosravan, Naji
    Liu, Yulin
    Stember, Joseph
    Shoag, Jonathan
    Bagci, Ulas
    Jambawalikar, Sachin
    DOMAIN ADAPTATION AND REPRESENTATION TRANSFER AND MEDICAL IMAGE LEARNING WITH LESS LABELS AND IMPERFECT DATA, DART 2019, MIL3ID 2019, 2019, 11795 : 63 - 71
  • [6] Task-Decoupled Knowledge Transfer for Cross-Modality Object Detection
    Wei, Chiheng
    Bai, Lianfa
    Chen, Xiaoyu
    Han, Jing
    ENTROPY, 2023, 25 (08)
  • [7] CROSS-MODALITY TRANSFER OF SPATIAL INFORMATION
    FISHBEIN, HD
    DECKER, J
    WILCOX, P
    BRITISH JOURNAL OF PSYCHOLOGY, 1977, 68 (NOV) : 503 - 508
  • [8] Learning to learn: From within-modality to cross-modality transfer during infancy
    Hupp, Julie M.
    Sloutsky, Vladimir M.
    JOURNAL OF EXPERIMENTAL CHILD PSYCHOLOGY, 2011, 110 (03) : 408 - 421
  • [9] Bridging asymmetry between image and video: Cross-modality knowledge transfer based on learning from video
    Zhou, Bingxin
    Zhou, Jianghao
    Chen, Zhongming
    Li, Ziqiang
    Deng, Long
    Ge, Yongxin
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 264
  • [10] Representation Learning for Cross-Modality Classification
    van Tulder, Gijs
    de Bruijne, Marleen
    MEDICAL COMPUTER VISION AND BAYESIAN AND GRAPHICAL MODELS FOR BIOMEDICAL IMAGING, 2017, 10081 : 126 - 136