Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review

Cited by: 0
Authors
Hassan, Shahab Ul [1 ,2 ]
Abdulkadir, Said Jadid [1 ,3 ]
Zahid, M Soperi Mohd [1 ,2 ]
Al-Selwi, Safwan Mahmood [1 ,3 ]
Affiliations
[1] Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
[2] Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
[3] Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
Keywords
Diagnosis; Medical problems; Medicinal chemistry; Patient treatment
DOI
10.1016/j.compbiomed.2024.109569
Abstract
Background: The interpretability and explainability of machine learning (ML) and artificial intelligence systems are critical for building trust in their outcomes in fields such as medicine and healthcare, where errors such as inaccurate diagnoses or treatment recommendations can have serious, even life-threatening, consequences for patients. Explainable Artificial Intelligence (XAI) has emerged as a significant research area addressing the black-box nature of sophisticated, difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can provide explanations for these models, raising confidence in the systems and improving trust in their predictions. Numerous studies have addressed medical problems by pairing ML models with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of emerging LIME techniques within healthcare domains that warrant more attention in XAI research.

Method: A systematic search of six databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) identified 1614 peer-reviewed articles published between 2019 and 2023.

Results: 52 articles were selected for detailed analysis. They show a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes.

Conclusion: The findings suggest that integrating XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals. © 2024 The Authors
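To make the LIME mechanism concrete: for an image, LIME segments the input into superpixels, generates perturbed copies with random subsets of superpixels masked out, queries the black-box model on those copies, and fits a locally weighted linear surrogate whose coefficients indicate which regions drove the prediction. The sketch below illustrates this workflow with the open-source `lime` Python package; the classifier function and the random image are hypothetical stand-ins for a trained medical-imaging model and a preprocessed scan, not code from any of the reviewed studies.

    # Minimal, illustrative LIME-for-images sketch. Assumes the open-source
    # `lime` package (pip install lime) plus NumPy and scikit-image.
    # `predict_proba` and `image` are stand-ins, not a real model or scan.
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def predict_proba(images: np.ndarray) -> np.ndarray:
        # Stand-in for a trained classifier's batch prediction function:
        # maps (N, H, W, 3) images to (N, n_classes) probabilities.
        scores = images.mean(axis=(1, 2, 3))[:, None]  # dummy score in [0, 1]
        return np.hstack([1.0 - scores, scores])

    image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed scan

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,
        predict_proba,     # black-box model, queried on perturbed copies
        top_labels=2,      # explain the two most probable classes
        hide_color=0,      # value used to "switch off" a superpixel
        num_samples=1000,  # perturbed samples for the local surrogate
    )

    # Highlight the superpixels that most support the top predicted class.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0],
        positive_only=True, num_features=5, hide_rest=False,
    )
    overlay = mark_boundaries(img, mask)  # save or plot this overlay

Here `num_samples` trades explanation stability against the cost of querying the model; in a real study, the stand-ins are replaced by a trained CNN's prediction function and an actual scan.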
Related papers
50 items in total
  • [21] Improving Object Recognition in Crime Scenes via Local Interpretable Model-Agnostic Explanations
    Farhood, Helia
    Saberi, Morteza
    Najafi, Mohammad
    2021 IEEE 25th International Enterprise Distributed Object Computing Conference Workshops (EDOCW 2021), 2021, pp. 90-94
  • [22] Evaluating Local Interpretable Model-Agnostic Explanations on Clinical Machine Learning Classification Models
    Kumarakulasinghe, Nesaretnam Barr
    Blomberg, Tobias
    Lin, Jintai
    Leao, Alexandra Saraiva
    Papapetrou, Panagiotis
    2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS 2020), 2020, pp. 7-12
  • [23] Predicting cervical cancer risk probabilities using advanced H2O AutoML and local interpretable model-agnostic explanation techniques
    Prusty, Sashikanta
    Patnaik, Srikanta
    Dash, Sujit Kumar
    Prusty, Sushree Gayatri Priyadarsini
    Rautaray, Jyotirmayee
    Sahoo, Ghanashyam
    PeerJ Computer Science, 2024, 10
  • [24] CBR-LIME: A Case-Based Reasoning Approach to Provide Specific Local Interpretable Model-Agnostic Explanations
    Recio-Garcia, Juan A.
    Diaz-Agudo, Belen
    Pino-Castilla, Victor
    Case-Based Reasoning Research and Development, ICCBR 2020, 2020, 12311: 179-194
  • [25] Pleural effusion diagnosis using local interpretable model-agnostic explanations and convolutional neural network
    Nguyen, H.T.
    Nguyen, C.N.T.
    Phan, T.M.N.
    Dao, T.C.
    IEIE Transactions on Smart Processing and Computing, 2021, 10 (02): 101-108
  • [26] Applying local interpretable model-agnostic explanations to identify substructures that are responsible for mutagenicity of chemical compounds
    Rosa, Lucca Caiaffa Santos
    Pimentel, Andre Silva
    Molecular Systems Design & Engineering, 2024, 9 (09): 920-936
  • [27] Interpretable ensemble deep learning model for early detection of Alzheimer's disease using local interpretable model-agnostic explanations
    Aghaei, Atefe
    Moghaddam, Mohsen Ebrahimi
    Malek, Hamed
    International Journal of Imaging Systems and Technology, 2022, 32 (06): 1889-1902
  • [28] Enhancing Visualization and Explainability of Computer Vision Models with Local Interpretable Model-Agnostic Explanations (LIME)
    Hamilton, Nicholas
    Webb, Adam
    Wilder, Matt
    Hendrickson, Ben
    Blanck, Matt
    Nelson, Erin
    Roemer, Wiley
    Havens, Timothy C.
    2022 IEEE Symposium Series on Computational Intelligence (SSCI), 2022, pp. 604-611
  • [29] TS-MULE: Local Interpretable Model-Agnostic Explanations for Time Series Forecast Models
    Schlegel, Udo
    Vo, Duy Lam
    Keim, Daniel A.
    Seebacher, Daniel
    Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021, Part I, 2021, 1524: 5-14
  • [30] Feasibility of local interpretable model-agnostic explanations (LIME) algorithm as an effective and interpretable feature selection method: comparative fNIRS study
    Shin, Jaeyoung
    Biomedical Engineering Letters, 2023, 13 (04): 689-703