Transparency in Diagnosis: Unveiling the Power of Deep Learning and Explainable AI for Medical Image Interpretation

Cited by: 0
Authors
Garg, Priya [1 ]
Sharma, M. K. [1 ]
Kumar, Parteek [2 ]
Affiliations
[1] Thapar Inst Engn & Technol, Dept Math, Patiala 147004, Punjab, India
[2] Washington State Univ, Sch Elect Engn & Comp Sci, Pullman, WA USA
Keywords
Deep learning; Medical images; Classification; XAI; Segmentation
DOI
10.1007/s13369-024-09896-5
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
Deep learning has revolutionized medical image processing, offering powerful tools for diagnosing critical conditions like brain tumors, skin cancer, and pneumonia. However, the opaque nature of these models necessitates interpretability, especially in healthcare applications where transparency and trust are vital. This study addresses this challenge by integrating explainable artificial intelligence (XAI) techniques, specifically local interpretable model-agnostic explanations (LIME) and gradient-weighted class activation mapping (Grad-CAM), to enhance model transparency. A comparative analysis of five deep learning models (CNN, XceptionNet, EfficientNet, VGG, and ResNet) is conducted across three distinct medical image classification tasks. Performance is evaluated using accuracy, precision, recall, F1 score, specificity, and AUC score, while LIME and Grad-CAM provide visual insights into the regions influencing model predictions, supporting clinical validation. The findings underscore the importance of XAI in improving model transparency, offering valuable insights into the reliability and applicability of deep learning models in medical image analysis, ultimately aiding in the development of more explainable and trustworthy AI systems for healthcare.
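The LIME procedure named in the abstract can be sketched end-to-end in a few lines: perturb an image by switching superpixels on and off, query the black-box model on each perturbed copy, and fit a locally weighted linear model whose coefficients rank the superpixels by importance. The following pure-NumPy sketch is an illustration of the general technique, not the authors' code: the toy `predict` function, the fixed 2x2 block segmentation, and the kernel width are all illustrative stand-ins (real LIME pipelines use a trained classifier and a segmentation algorithm such as quickshift).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box classifier": scores an 8x8 image by the mean intensity of
# its top-left quadrant (a stand-in for a trained CNN's class probability).
def predict(img):
    return img[:4, :4].mean()

# Partition the image into 4 superpixel blocks. Real LIME segments with an
# algorithm like quickshift; fixed quadrants keep this sketch simple.
segments = np.zeros((8, 8), dtype=int)
segments[:4, 4:] = 1
segments[4:, :4] = 2
segments[4:, 4:] = 3
n_seg = 4

image = rng.random((8, 8))

# Sample perturbed versions: each row of Z says which superpixels are kept.
Z = rng.integers(0, 2, size=(200, n_seg))
preds = np.empty(len(Z))
for i, z in enumerate(Z):
    mask = z[segments]            # broadcast each on/off choice to its pixels
    preds[i] = predict(image * mask)

# Weight samples by proximity to the original (all-superpixels-on) instance.
dist = (n_seg - Z.sum(axis=1)) / n_seg
w = np.exp(-(dist ** 2) / 0.25)

# Fit a weighted linear surrogate: coefficients = superpixel importances.
X = np.hstack([np.ones((len(Z), 1)), Z])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(X * sw[:, None], preds * sw, rcond=None)
importance = coef[1:]
print(importance.round(3))        # segment 0 (top-left) should dominate
```

Because the toy model only looks at the top-left quadrant, the surrogate assigns nearly all importance to superpixel 0, which is exactly the kind of spatial attribution LIME overlays on a medical image for clinical validation.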
Pages: 17