Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review

Cited by: 0
Authors
Hassan, Shahab Ul [1 ,2 ]
Abdulkadir, Said Jadid [1 ,3 ]
Zahid, M Soperi Mohd [1 ,2 ]
Al-Selwi, Safwan Mahmood [1 ,3 ]
Affiliations
[1] Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
[2] Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
[3] Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
Keywords
Diagnosis; Medical problems; Medicinal chemistry; Patient treatment
DOI
10.1016/j.compbiomed.2024.109569
Abstract
Background: The interpretability and explainability of machine learning (ML) and artificial intelligence systems are critical for building trust in their outcomes in fields such as medicine and healthcare. Errors generated by these systems, such as inaccurate diagnoses or treatments, can have serious and even life-threatening consequences for patients. Explainable Artificial Intelligence (XAI) is an increasingly significant area of research that addresses the black-box nature of sophisticated, difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can explain these models' predictions, raising confidence in the systems and improving trust in their outputs. Numerous works have addressed medical problems by combining ML models with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of emerging LIME techniques within healthcare domains that require more attention in XAI research.

Method: A systematic search of several databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) identified 1614 peer-reviewed articles published between 2019 and 2023.

Results: 52 articles were selected for detailed analysis. They show a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes.

Conclusion: The findings suggest that integrating XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals. © 2024 The Authors
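LIME's core idea, as applied to the models surveyed above, is to perturb an input around the point of interest, query the black-box model on the perturbed samples, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature attributions. The sketch below is not from the review itself; it is a minimal NumPy illustration of that perturb-and-fit loop (the function name `lime_explain` and all parameter values are illustrative assumptions, not the `lime` library's API).

```python
import numpy as np

def lime_explain(f, x, n_samples=500, kernel_width=0.75, seed=0):
    """Minimal LIME-style local explanation for a black-box scorer f.

    Perturbs x with Gaussian noise, weights each sample by proximity
    to x, fits a weighted linear surrogate, and returns its per-feature
    coefficients as local attributions.
    """
    rng = np.random.default_rng(seed)
    # Sample a neighbourhood around the instance being explained.
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # Query the black-box model on every perturbed sample.
    y = np.array([f(z) for z in X])
    # Exponential proximity kernel: nearby samples count more.
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares via sqrt-weight scaling (intercept + features).
    sw = np.sqrt(w)
    A = np.hstack([np.ones((n_samples, 1)), X]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return coef[1:]  # drop the intercept; keep per-feature weights

# For a model that is already linear, the surrogate recovers its slopes:
attributions = lime_explain(lambda z: 3 * z[0] - 2 * z[1] + 1,
                            np.array([1.0, 2.0]))
```

In practice the reviewed papers use the published `lime` package (with superpixel perturbations for imaging rather than Gaussian noise), but the weighted-surrogate structure is the same.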
Related Papers
(50 records)
  • [31] Feasibility of local interpretable model-agnostic explanations (LIME) algorithm as an effective and interpretable feature selection method: comparative fNIRS study
    Jaeyoung Shin
    Biomedical Engineering Letters, 2023, 13: 689-703
  • [32] Generating structural alerts from toxicology datasets using the local interpretable model-agnostic explanations method
    Nascimento, Cayque Monteiro Castro
    Moura, Paloma Guimaraes
    Pimentel, Andre Silva
    DIGITAL DISCOVERY, 2023, 2 (05): 1311-1325
  • [33] Local interpretable model-agnostic explanations guided brain magnetic resonance imaging classification for identifying attention deficit hyperactivity disorder subtypes
    K. Usha Rupni
    P. Aruna Priya
    Journal of Ambient Intelligence and Humanized Computing, 2025, 16 (2): 361-374
  • [34] Foreign direct investment and local interpretable model-agnostic explanations: a rational framework for FDI decision making
    Singh, Devesh
    JOURNAL OF ECONOMICS FINANCE AND ADMINISTRATIVE SCIENCE, 2024, 29 (57): 98-120
  • [35] Trapezoidal Step Scheduler for Model-Agnostic Meta-Learning in Medical Imaging
    Voon, Wingates
    Hum, Yan Chai
    Tee, Yee Kai
    Yap, Wun-She
    Lai, Khin Wee
    Nisar, Humaira
    Mokayed, Hamam
    PATTERN RECOGNITION, 2025, 161
  • [36] Investigating Black-Box Model for Wind Power Forecasting Using Local Interpretable Model-Agnostic Explanations Algorithm
    Yang, Mao
    Xu, Chuanyu
    Bai, Yuying
    Ma, Miaomiao
    Su, Xin
    CSEE JOURNAL OF POWER AND ENERGY SYSTEMS, 2025, 11 (01): 227-242
  • [37] Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology
    Martinez, Miguel Angel Meza
    Nadj, Mario
    Langner, Moritz
    Toreini, Peyman
    Maedche, Alexander
    ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS, 2023, 13 (04)
  • [38] Prediction of Acute Kidney Injury in Cardiac Surgery Patients: Interpretation using Local Interpretable Model-agnostic Explanations
    da Cruz, Harry Freitas
    Schneider, Frederic
    Schapranow, Matthieu-P
    HEALTHINF: PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES - VOL 5: HEALTHINF, 2019: 380-387
  • [39] A Multiobjective Genetic Algorithm to Evolving Local Interpretable Model-Agnostic Explanations for Deep Neural Networks in Image Classification
    Wang, Bin
    Pei, Wenbin
    Xue, Bing
    Zhang, Mengjie
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2024, 28 (04): 903-917
  • [40] Early knee osteoarthritis classification using distributed explainable convolutional neural network with local interpretable model-agnostic explanations
    Kumar, M. Ganesh
    Gumma, Lakshmi Narayana
    Neelam, Saikiran
    Yaswanth, Narikamalli
    Yedukondalu, Jammisetty
    Engineering Research Express, 2024, 6 (04)