Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis

Cited by: 23
Authors
Singh, Amitojdeep [1 ,2 ]
Balaji, Janarthanam Jothi [3 ]
Rasheed, Mohammed Abdul [1 ]
Jayakumar, Varadharajan [1 ]
Raman, Rajiv [4 ]
Lakshminarayanan, Vasudevan [1 ,2 ]
Affiliations
[1] Univ Waterloo, Sch Optometry & Vis Sci, Theoret & Expt Epistemol Lab TEEL, Waterloo, ON, Canada
[2] Univ Waterloo, Dept Syst Design Engn, Waterloo, ON, Canada
[3] Med Res Fdn, Dept Optometry, Chennai, Tamil Nadu, India
[4] Sankara Nethralaya, Shri Bhagwan Mahavir Vitreoretinal Serv, Chennai, Tamil Nadu, India
Source
CLINICAL OPHTHALMOLOGY | 2021, Vol. 15
Funding
Natural Sciences and Engineering Research Council of Canada
关键词
explainable AI; deep learning; machine learning; image processing; optical coherence tomography; retina; diabetic macular edema; choroidal neovascularization; drusen;
DOI
10.2147/OPTH.S312236
Chinese Library Classification
R77 (Ophthalmology)
Discipline Code
100212
Abstract
Background: The lack of explanations for the decisions made by deep learning algorithms has hampered their acceptance by the clinical community, despite highly accurate results on multiple problems. Attribution methods that explain deep learning models have been tested on medical imaging problems, and their performance has been compared for models trained on standard machine learning datasets, but not on medical images. In this study, we performed a comparative analysis to determine the method with the best explanations for retinal OCT diagnosis.
Methods: A well-known deep learning model, Inception-v3, was trained to diagnose three retinal diseases: choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen. The explanations from 13 different attribution methods were rated by a panel of 14 clinicians for clinical significance, and feedback was obtained from the clinicians regarding the current and future scope of such methods.
Results: An attribution method based on Taylor series expansion, called Deep Taylor, was rated highest by the clinicians, with a median rating of 3.85/5. It was followed by Guided Backpropagation (GBP) and SHapley Additive exPlanations (SHAP).
Conclusion: Explanations from the top methods were able to highlight the relevant structures for each disease: fluid accumulation for CNV, the boundaries of edema for DME, and bumpy areas of the retinal pigment epithelium (RPE) for drusen. The most suitable method for a specific medical diagnosis task may differ from the one considered best for conventional tasks. Overall, there was a high degree of acceptance from the clinicians surveyed in the study.
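The core idea behind gradient-based attribution methods such as GBP can be sketched in miniature. The snippet below is an illustrative sketch only, not the paper's implementation: it substitutes a tiny random one-hidden-layer ReLU network for Inception-v3 (the weights and the 8x8 "OCT patch" input are placeholders) and computes a vanilla gradient saliency map for one of the three classes, plus the Guided Backpropagation variant, whose defining change is zeroing negative gradients when backpropagating through ReLUs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in classifier: one ReLU hidden layer, 3 output classes
# (CNV, DME, drusen). Random weights, purely for illustration.
W1 = rng.standard_normal((16, 64)) * 0.1   # input: flattened 8x8 patch
W2 = rng.standard_normal((3, 16)) * 0.1

def forward(x):
    h_pre = W1 @ x               # hidden pre-activation
    h = np.maximum(h_pre, 0)     # ReLU
    logits = W2 @ h
    return h_pre, h, logits

def saliency(x, cls, guided=False):
    """Gradient of the class logit w.r.t. the input pixels.

    With guided=True, negative gradients are zeroed at the ReLU,
    which is the core modification of Guided Backpropagation."""
    h_pre, _, _ = forward(x)
    g_h = W2[cls] * (h_pre > 0)  # backprop through ReLU
    if guided:
        g_h = np.maximum(g_h, 0) # guided BP: keep positive signal only
    return W1.T @ g_h            # d logit / d input

x = rng.standard_normal(64)      # placeholder "image"
sal = saliency(x, cls=0)
gbp = saliency(x, cls=0, guided=True)
print(sal.shape, gbp.shape)      # (64,) (64,)
```

In a real pipeline each entry of `sal` would be reshaped back to image dimensions and displayed as a heatmap over the OCT scan; libraries such as Captum or iNNvestigate provide these methods, including Deep Taylor and SHAP, for full-scale networks.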
Pages: 2573-2581
Page count: 9
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2018, 38 (04) : 84 - 92