Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review

Cited by: 0
Authors
Hassan, Shahab Ul [1 ,2 ]
Abdulkadir, Said Jadid [1 ,3 ]
Zahid, M Soperi Mohd [1 ,2 ]
Al-Selwi, Safwan Mahmood [1 ,3 ]
Affiliations
[1] Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
[2] Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
[3] Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
Keywords
Diagnosis; Medical problems; Medicinal chemistry; Patient treatment
DOI
10.1016/j.compbiomed.2024.109569
Abstract
Background: The interpretability and explainability of machine learning (ML) and artificial intelligence (AI) systems are critical for establishing trust in their outcomes in fields such as medicine and healthcare. Errors generated by these systems, such as inaccurate diagnoses or treatments, can have serious and even life-threatening consequences for patients. Explainable Artificial Intelligence (XAI) has emerged as a significant research area that addresses the black-box nature of sophisticated, difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can explain the predictions of these models, raising confidence in the systems and improving trust in their outputs. Numerous published works address medical problems by combining ML models with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of emerging LIME techniques within healthcare domains that warrant more attention in XAI research.
Method: A systematic search of six databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) identified 1614 peer-reviewed articles published between 2019 and 2023.
Results: Of these, 52 articles were selected for detailed analysis. They show a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes.
Conclusion: The findings suggest that integrating XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, thereby potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals. © 2024 The Authors
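As illustrative context only (not taken from the review itself), the sketch below shows how LIME is typically applied to an image classifier using the open-source lime Python package. The classifier predict_fn, the dummy input image, and all parameter values are hypothetical placeholders standing in for a real medical-imaging model.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    """Hypothetical stand-in for a trained classifier's batched predict.

    Takes a batch of RGB images with shape (N, H, W, 3) and returns class
    probabilities with shape (N, num_classes); replace with a real model,
    e.g. model.predict(preprocess(images)).
    """
    rng = np.random.default_rng(0)
    return rng.dirichlet(np.ones(2), size=len(images))  # dummy 2-class output

# Stand-in for a loaded, normalized scan (values in [0, 1]).
image = np.random.rand(224, 224, 3)

explainer = lime_image.LimeImageExplainer(random_state=42)
explanation = explainer.explain_instance(
    image,              # the single instance to explain
    predict_fn,         # black-box prediction function
    top_labels=1,       # explain only the most probable class
    hide_color=0,       # fill value for masked-out superpixels
    num_samples=1000,   # perturbed samples used to fit the local surrogate
)

# Highlight the superpixels that most support the predicted class.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)  # explanation regions outlined for display

Because the local surrogate is fit on randomly perturbed copies of a single input, num_samples trades explanation stability against runtime; checking that explanations are stable across random seeds is a common precaution before relying on them in clinical settings.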
Related papers (50 in total)
  • [41] Explainable machine learning techniques for hybrid nanofluids transport characteristics: an evaluation of Shapley additive and local interpretable model-agnostic explanations
    Kanti, Praveen Kumar
    Sharma, Prabhakar
    Wanatasanappan, V. Vicki
    Said, Nejla Mahjoub
    JOURNAL OF THERMAL ANALYSIS AND CALORIMETRY, 2024, 149 (21) : 11599 - 11618
  • [42] A systematic literature review: exploring the challenges of ensemble model for medical imaging
    Muhamad Rodhi Supriyadi
    Azurah Bte A. Samah
    Jemie Muliadi
    Raja Azman Raja Awang
    Noor Huda Ismail
    Hairudin Abdul Majid
    Mohd Shahizan Bin Othman
    Siti Zaiton Binti Mohd Hashim
    BMC MEDICAL IMAGING, 2025, 25 (1)
  • [43] Interpretation of Drop Size Predictions from a Random Forest Model Using Local Interpretable Model-Agnostic Explanations (LIME) in a Rotating Disc Contactor
    Prabhu, Hardik
    Sane, Aamod
    Dhadwal, Renu
    Parlikkad, Naren Rajan
    Valadi, Jayaraman Krishnamoorthy
    INDUSTRIAL & ENGINEERING CHEMISTRY RESEARCH, 2023, 62 (45) : 19019 - 19034
  • [44] Explainable machine learning techniques based on attention gate recurrent unit and local interpretable model-agnostic explanations for multivariate wind speed forecasting
    Peng, Lu
    Lv, Sheng-Xiang
    Wang, Lin
    JOURNAL OF FORECASTING, 2024, 43 (06) : 2064 - 2087
  • [45] Interpretative analyses for milling surface roughness prediction in thermally modified timber: Shapley value (SHAP) and local interpretable model-agnostic explanations (LIME)
    Huang, Wenlan
    Jin, Qingyang
    Guo, Xiaolei
    Na, Bin
    WOOD MATERIAL SCIENCE & ENGINEERING, 2025
  • [46] Detection of COVID-19 findings by the local interpretable model-agnostic explanations method of types-based activations extracted from CNNs
    Togacar, Mesut
    Muzoglu, Nedim
    Ergen, Burhan
    Yarman, Bekir Siddik Binboga
    Halefoglu, Ahmet Mesrur
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2022, 71
  • [47] Data augmentation for medical imaging: A systematic literature review
    Garcea, Fabio
    Serra, Alessio
    Lamberti, Fabrizio
    Morra, Lia
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 152
  • [48] The Accuracy and Faithfulness of AL-DLIME - Active Learning-Based Deterministic Local Interpretable Model-Agnostic Explanations: A Comparison with LIME and DLIME in Medicine
    Holm, Sarah
    Macedo, Luis
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT I, 2023, 1901 : 582 - 605
  • [49] Exploring Local Explanation of Practical Industrial AI Applications: A Systematic Literature Review
    Le, Thi-Thu-Huong
    Prihatno, Aji Teguh
    Oktian, Yustus Eko
    Kang, Hyoeun
    Kim, Howon
    APPLIED SCIENCES-BASEL, 2023, 13 (09)