Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis

Cited: 23
Authors
Singh, Amitojdeep [1,2]
Balaji, Janarthanam Jothi [3 ]
Rasheed, Mohammed Abdul [1 ]
Jayakumar, Varadharajan [1 ]
Raman, Rajiv [4 ]
Lakshminarayanan, Vasudevan [1,2]
Affiliations
[1] Univ Waterloo, Sch Optometry & Vis Sci, Theoret & Expt Epistemol Lab TEEL, Waterloo, ON, Canada
[2] Univ Waterloo, Dept Syst Design Engn, Waterloo, ON, Canada
[3] Med Res Fdn, Dept Optometry, Chennai, Tamil Nadu, India
[4] Sankara Nethralaya, Shri Bhagwan Mahavir Vitreoretinal Serv, Chennai, Tamil Nadu, India
Source
CLINICAL OPHTHALMOLOGY | 2021, Vol. 15
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
explainable AI; deep learning; machine learning; image processing; optical coherence tomography; retina; diabetic macular edema; choroidal neovascularization; drusen;
DOI
10.2147/OPTH.S312236
Chinese Library Classification
R77 [Ophthalmology]
Discipline Code
100212
Abstract
Background: The lack of explanations for the decisions made by deep learning algorithms has hampered their acceptance by the clinical community despite highly accurate results on multiple problems. Attribution methods explaining deep learning models have been tested on medical imaging problems. The performance of various attribution methods has been compared for models trained on standard machine learning datasets but not on medical images. In this study, we performed a comparative analysis to determine the method with the best explanations for retinal OCT diagnosis.
Methods: A well-known deep learning model, Inception-v3, was trained to diagnose 3 retinal diseases - choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen. The explanations from 13 different attribution methods were rated by a panel of 14 clinicians for clinical significance. Feedback was obtained from the clinicians regarding the current and future scope of such methods.
Results: An attribution method based on Taylor series expansion, called Deep Taylor, was rated the highest by clinicians, with a median rating of 3.85/5. It was followed by Guided Backpropagation (GBP) and SHapley Additive exPlanations (SHAP).
Conclusion: Explanations from the top methods were able to highlight the structures relevant to each disease - fluid accumulation for CNV, the boundaries of edema for DME, and bumpy areas of retinal pigment epithelium (RPE) for drusen. The most suitable method for a specific medical diagnosis task may differ from the one considered best for conventional tasks. Overall, there was a high degree of acceptance from the clinicians surveyed in the study.
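For illustration, the following is a minimal, hypothetical sketch of the kind of attribution workflow the abstract describes: producing a SHAP-style saliency map for a retinal OCT B-scan from an Inception-v3 classifier. This is not the authors' code; it assumes a PyTorch/torchvision Inception-v3 and the Captum library's GradientShap (a gradient-based approximation of SHAP values), and the input file name, baseline choice, and ImageNet classifier head are placeholders for illustration only.

import torch
from torchvision import models, transforms
from torchvision.models import Inception_V3_Weights
from PIL import Image
from captum.attr import GradientShap

# Inception-v3 backbone; in the study its classifier head would be retrained
# for the OCT classes (CNV, DME, drusen, normal). The ImageNet head is kept
# here purely for illustration.
model = models.inception_v3(weights=Inception_V3_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),                 # Inception-v3 input size
    transforms.Grayscale(num_output_channels=3),   # OCT B-scans are grayscale
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("oct_bscan.png")   # hypothetical input file
x = preprocess(image).unsqueeze(0)    # add batch dimension

# The predicted class serves as the attribution target.
with torch.no_grad():
    pred_class = model(x).argmax(dim=1).item()

# GradientShap: expected gradients between the input and baseline images,
# yielding per-pixel SHAP-style attributions for the predicted class.
explainer = GradientShap(model)
baselines = torch.zeros_like(x)       # all-black baseline distribution
attribution = explainer.attribute(x, baselines=baselines,
                                  target=pred_class, n_samples=20)

# Collapse the channel dimension to a single saliency map for overlay.
saliency = attribution.squeeze(0).abs().sum(dim=0)
print(saliency.shape)                 # torch.Size([299, 299])

In practice, such a saliency map would be normalized and overlaid as a heat map on the B-scan before being shown to a clinician panel; maps from Deep Taylor or Guided Backpropagation, the two top-rated methods, would be generated analogously with a toolkit that implements them, such as iNNvestigate.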
Pages: 2573-2581 (9 pages)