Cross-modality transfer learning with knowledge infusion for diabetic retinopathy grading

Cited by: 0
Authors
Chen, Tao [1 ,2 ]
Bai, Yanmiao [2 ]
Mao, Haiting [1 ,2 ]
Liu, Shouyue [1 ,2 ]
Xu, Keyi [1 ,2 ]
Xiong, Zhouwei [1 ,2 ]
Ma, Shaodong [2 ]
Yang, Fang [1 ,2 ]
Zhao, Yitian [1 ,2 ]
Affiliations
[1] Wenzhou Med Univ, Cixi Biomed Res Inst, Ningbo, Peoples R China
[2] Chinese Acad Sci, Inst Biomed Engn, Ningbo Inst Mat Technol & Engn, Ningbo, Peoples R China
Keywords
ultra-wide-field image; domain adaptation; diabetic retinopathy; lesion segmentation; disease diagnosis; UNSUPERVISED DOMAIN ADAPTATION; NEURAL-NETWORK; IMAGES; SYSTEM; DEEP
DOI
10.3389/fmed.2024.1400137
Chinese Library Classification
R5 [Internal Medicine]
Discipline classification code
1002; 100201
Abstract
Background: Ultra-wide-field (UWF) fundus photography is an emerging retinal imaging technique that offers a broader field of view, enhancing its utility in screening and diagnosing various eye diseases, notably diabetic retinopathy (DR). However, computer-aided diagnosis of DR from UWF images faces two major challenges. First, labeled UWF data are scarce, making diagnostic models difficult to train given the high cost of manually annotating medical images. Second, the performance of existing models suffers from the absence of prior knowledge to guide the learning process.
Purpose: By leveraging extensively annotated datasets in the field (large-scale, high-quality color fundus image datasets annotated at image level or pixel level), we aim to transfer knowledge from these datasets to our target domain through unsupervised domain adaptation.
Methods: We present a robust model for assessing DR severity that applies unsupervised lesion-aware domain adaptation to UWF images. To harness the detailed annotations available in public color fundus image datasets, we integrate an adversarial lesion map generator that supplements the grading model with auxiliary lesion information, inspired by the clinical practice of evaluating DR severity by identifying and quantifying associated lesions.
Results: We evaluated the proposed method both quantitatively and qualitatively. Among six representative DR grading methods, our approach achieved an accuracy (ACC) of 68.18% and a precision (Pre) of 67.43%. Extensive ablation studies further validated the effectiveness of each component of the proposed method.
Conclusion: Our method not only improves the accuracy of DR grading but also enhances the interpretability of the results, providing clinicians with a reliable DR grading scheme.
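The record does not include the authors' code, but the Methods paragraph combines a grading objective with an auxiliary lesion term and an adversarial domain-confusion term. The sketch below is a hypothetical illustration of such a combined objective; the function names, loss weights (lam, mu), and the specific cross-entropy formulation are assumptions, not the paper's published implementation (in a real adversarial setup the domain term would be coupled to the feature extractor through a gradient-reversal layer or a separate discriminator update).

```python
import numpy as np

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class; small epsilon avoids log(0).
    return -np.log(probs[label] + 1e-12)

def combined_loss(grade_probs, grade_label,
                  lesion_probs, lesion_label,
                  domain_probs, domain_label,
                  lam=1.0, mu=0.1):
    """Hypothetical training objective for lesion-aware adversarial
    domain adaptation: a DR-grading loss, plus a lesion-map auxiliary
    loss weighted by lam, minus a domain-classification loss weighted
    by mu (subtracting it encourages domain-indistinguishable
    features, mimicking adversarial training)."""
    l_grade = cross_entropy(grade_probs, grade_label)
    l_lesion = cross_entropy(lesion_probs, lesion_label)
    l_domain = cross_entropy(domain_probs, domain_label)
    return l_grade + lam * l_lesion - mu * l_domain

# Toy example: 3 DR grades, binary lesion presence, 2 domains
# (color fundus vs. UWF).
loss = combined_loss(np.array([0.1, 0.7, 0.2]), 1,
                     np.array([0.6, 0.4]), 0,
                     np.array([0.5, 0.5]), 0)
```

A maximally confused domain discriminator (uniform domain probabilities, as above) yields the largest penalty reduction, which is the intuition behind the adversarial term.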
Pages: 17
Related papers (50 total)
  • [21] Enhancing the Accuracy of an Image Classification Model Using Cross-Modality Transfer Learning
    Liu, Jiaqi
    Chui, Kwok Tai
    Lee, Lap-Kei
    ELECTRONICS, 2023, 12 (15)
  • [22] CROSS-MODALITY MEDICAL IMAGE DETECTION AND SEGMENTATION BY TRANSFER LEARNING OF SHAPE PRIORS
    Zheng, Yefeng
    2015 IEEE 12TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), 2015, : 424 - 427
  • [23] Learning Cross-modality Similarity for Multinomial Data
    Jia, Yangqing
    Salzmann, Mathieu
    Darrell, Trevor
    2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2011, : 2407 - 2414
  • [24] Cross-modality collaborative learning identified pedestrian
    Wen, Xiongjun
    Feng, Xin
    Li, Ping
    Chen, Wenfang
    The Visual Computer, 2023, 39 : 4117 - 4132
  • [25] Representation Learning Through Cross-Modality Supervision
    Sankaran, Nishant
    Mohan, Deen Dayal
    Setlur, Srirangaraj
    Govindaraju, Venugopal
    Fedorishin, Dennis
    2019 14TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2019), 2019, : 107 - 114
  • [26] Cross-Modality Retrieval by Joint Correlation Learning
    Wang, Shuo
    Guo, Dan
    Xu, Xin
    Zhuo, Li
    Wang, Meng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2019, 15 (02)
  • [27] Cross-Domain and Cross-Modality Transfer Learning for Multi-domain and Multi-modality Event Detection
    Yang, Zhenguo
    Cheng, Min
    Li, Qing
    Li, Yukun
    Lin, Zehang
    Liu, Wenyin
    WEB INFORMATION SYSTEMS ENGINEERING, WISE 2017, PT I, 2017, 10569 : 516 - 523
  • [28] Cross-Modality Bridging and Knowledge Transferring for Image Understanding
    Yan, Chenggang
    Li, Liang
    Zhang, Chunjie
    Liu, Bingtao
    Zhang, Yongdong
    Dai, Qionghai
    IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (10) : 2675 - 2685
  • [29] Cross-modality Multiple Relations Learning for Knowledge-based Visual Question Answering
    Wang, Yan
    Li, Peize
    Si, Qingyi
    Zhang, Hanwen
    Zang, Wenyu
    Lin, Zheng
    Fu, Peng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (03)
  • [30] The effects of cross-modality and level of self-regulated learning on knowledge acquisition with smartpads
    Lee, Hye Yeon
    Lee, Hyeon Woo
    Educational Technology Research and Development, 2018, 66 : 247 - 265