Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review

Cited by: 0
Authors
Hassan, Shahab Ul [1 ,2 ]
Abdulkadir, Said Jadid [1 ,3 ]
Zahid, M Soperi Mohd [1 ,2 ]
Al-Selwi, Safwan Mahmood [1 ,3 ]
Affiliations
[1] Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
[2] Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
[3] Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
Keywords
Diagnosis; Medical problems; Medicinal chemistry; Patient treatment
DOI
10.1016/j.compbiomed.2024.109569
Abstract
Background: The interpretability and explainability of machine learning (ML) and artificial intelligence (AI) systems are critical for building trust in their outcomes in fields such as medicine and healthcare. Errors produced by these systems, such as inaccurate diagnoses or treatment recommendations, can have serious and even life-threatening consequences for patients. Explainable Artificial Intelligence (XAI) is an increasingly significant research area that addresses the black-box nature of sophisticated, difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can explain the predictions of these models, raising confidence in the systems and improving trust in their outputs. Numerous studies have addressed medical problems by combining ML models with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of emerging LIME techniques within healthcare domains that require more attention in XAI research.
Method: A systematic search of several databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) identified 1614 peer-reviewed articles published between 2019 and 2023.
Results: The 52 articles selected for detailed analysis show a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes.
Conclusion: The findings suggest that integrating XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals. © 2024 The Authors
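The abstract describes LIME's core idea: approximate a black-box model near one instance with a simple, interpretable surrogate. The sketch below is not the implementation used by any of the reviewed papers (which typically rely on the `lime` Python package); it is a minimal from-scratch illustration, assuming a tabular black-box `predict` function, of the perturb / weight-by-proximity / fit-a-weighted-linear-surrogate recipe. All names (`lime_explain`, `kernel_width`, etc.) are illustrative, not from the source.

```python
import math
import random

def lime_explain(predict, instance, n_samples=500, kernel_width=0.75, seed=0):
    """Minimal LIME-style local explanation for a tabular black-box.

    Perturbs the instance with Gaussian noise, weights each perturbation by
    proximity to the instance, and fits a weighted linear surrogate whose
    coefficients serve as per-feature attributions.
    """
    rng = random.Random(seed)
    d = len(instance)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, 1.0) for xi in instance]
        dist2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        X.append([1.0] + z)                     # prepend intercept column
        y.append(predict(z))
        w.append(math.exp(-dist2 / kernel_width ** 2))  # proximity kernel

    # Weighted least squares via the normal equations (X^T W X) b = X^T W y.
    k = d + 1
    ATA = [[sum(w[i] * X[i][p] * X[i][q] for i in range(n_samples))
            for q in range(k)] for p in range(k)]
    ATy = [sum(w[i] * X[i][p] * y[i] for i in range(n_samples))
           for p in range(k)]

    # Solve the k x k system by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ATA[r][col]))
        ATA[col], ATA[piv] = ATA[piv], ATA[col]
        ATy[col], ATy[piv] = ATy[piv], ATy[col]
        for r in range(col + 1, k):
            f = ATA[r][col] / ATA[col][col]
            for c in range(col, k):
                ATA[r][c] -= f * ATA[col][c]
            ATy[r] -= f * ATy[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = ATy[r] - sum(ATA[r][c] * coef[c] for c in range(r + 1, k))
        coef[r] = s / ATA[r][r]
    return coef[1:]  # drop the intercept; return per-feature attributions

# A black-box that happens to be linear (3*x0 - 2*x1): the local surrogate
# should recover coefficients close to 3 and -2.
attributions = lime_explain(lambda z: 3.0 * z[0] - 2.0 * z[1], [1.0, 1.0])
```

In practice the `lime` library adds refinements this sketch omits, such as discretizing features, sampling in an interpretable representation, and selecting a sparse subset of features before fitting the surrogate.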
Related papers
50 items
  • [11] A novel dataset and local interpretable model-agnostic explanations (LIME) for monkeypox prediction
    Sharma, Nonita
    Mohanty, Sachi Nandan
    Mahato, Shalini
    Pattanaik, Chinmaya Ranjan
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2023, 17 (04): : 1297 - 1308
  • [12] Sharpening Local Interpretable Model-Agnostic Explanations for Histopathology: Improved Understandability and Reliability
    Graziani, Mara
    de Sousa, Iam Palatnik
    Vellasco, Marley M. B. R.
    da Silva, Eduardo Costa
    Muller, Henning
    Andrearczyk, Vincent
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT III, 2021, 12903 : 540 - 549
  • [13] Deep Learning Explainability with Local Interpretable Model-Agnostic Explanations for Monkeypox Prediction
    Angmo, Motup
    Sharma, Nonita
    Mohanty, Sachi Nandan
    Ijaz Khan, M.
    Mamatov, Abdugafur
    Kallel, Mohamed
    JOURNAL OF MECHANICS IN MEDICINE AND BIOLOGY, 2025,
  • [14] Explanation-driven Self-adaptation using Model-agnostic Interpretable Machine Learning
    Negri, Francesco Renato
    Nicolosi, Niccolo
    Camilli, Matteo
    Mirandola, Raffaela
    PROCEEDINGS OF THE 2024 IEEE/ACM 19TH SYMPOSIUM ON SOFTWARE ENGINEERING FOR ADAPTIVE AND SELF-MANAGING SYSTEMS, SEAMS 2024, 2024, : 189 - 199
  • [15] Multi-scale Local Explanation Approach for Image Analysis Using Model-Agnostic Explainable Artificial Intelligence (XAI)
    Hajiyan, Hooria
    Ebrahimi, Mehran
    MEDICAL IMAGING 2023, 2023, 12471
  • [16] Development of a classification model for Cynanchum wilfordii and Cynanchum auriculatum using convolutional neural network and local interpretable model-agnostic explanation technology
    Jung, Dae-Hyun
    Kim, Ho-Youn
    Won, Jae Hee
    Park, Soo Hyun
    FRONTIERS IN PLANT SCIENCE, 2023, 14
  • [17] Predicting households' residential mobility trajectories with geographically localized interpretable model-agnostic explanation (GLIME)
    Jin, Chanwoo
    Park, Sohyun
    Ha, Hui Jeong
    Lee, Jinhyung
    Kim, Junghwan
    Hutchenreuther, Johan
    Nara, Atsushi
    INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE, 2023, 37 (12) : 2597 - 2619
  • [18] Constructing Interpretable Belief Rule Bases Using a Model-Agnostic Statistical Approach
    Sun, Chao
    Wang, Yinghui
    Yan, Tao
    Yang, Jinlong
    Huang, Liangyi
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2024, 32 (09) : 5163 - 5175
  • [19] Model-agnostic local explanation: Multi-objective genetic algorithm explainer
    Nematzadeh, Hossein
    Garcia-Nieto, Jose
    Hurtado, Sandro
    Aldana-Montes, Jose F.
    Navas-Delgado, Ismael
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139
  • [20] ILIME: Local and Global Interpretable Model-Agnostic Explainer of Black-Box Decision
    ElShawi, Radwa
    Sherif, Youssef
    Al-Mallah, Mouaz
    Sakr, Sherif
    ADVANCES IN DATABASES AND INFORMATION SYSTEMS, ADBIS 2019, 2019, 11695 : 53 - 68