Explainable AI: A review of applications to neuroimaging data

Cited: 20
Authors
Farahani, Farzad V. [1 ,2 ]
Fiok, Krzysztof [2 ]
Lahijanian, Behshad [3 ,4 ]
Karwowski, Waldemar [2 ]
Douglas, Pamela K. [5 ]
Affiliations
[1] Johns Hopkins Univ, Dept Biostat, Baltimore, MD 21218 USA
[2] Univ Cent Florida, Dept Ind Engn & Management Syst, Orlando, FL 32816 USA
[3] Univ Florida, Dept Ind & Syst Engn, Gainesville, FL USA
[4] Georgia Inst Technol, H Milton Stewart Sch Ind & Syst Engn, Atlanta, GA USA
[5] Univ Cent Florida, Sch Modeling Simulat & Training, Orlando, FL USA
Keywords
explainable AI; interpretability; artificial intelligence (AI); deep learning; neural networks; medical imaging; neuroimaging; support vector machine; deep neural networks; feature selection; classification; transparency; diseases; vision; impact; cancer
DOI
10.3389/fnins.2022.906290
CLC Classification
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
Deep neural networks (DNNs) have transformed the field of computer vision and currently provide some of the best models of the representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level or better performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or a diagnosis accurately but also to provide explanations that support the model's decision in a form that a human can readily interpret. This lack of transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques, taking somewhat divergent approaches, have been developed to peer inside the "black box" and make sense of DNN models. Here, we suggest that these methods may be considered in light of the interpretation goal: providing functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then review recent applications of post-hoc relevance techniques to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
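To make the notion of a post-hoc relevance technique concrete, the sketch below computes a simple gradient-based saliency map, one member of the family of attribution methods the review surveys. Everything in it is a hypothetical illustration, not code from the paper: the gradient_saliency helper, the toy 3D CNN, and the input volume shape are all assumptions chosen only to show the mechanics.

```python
import torch
import torch.nn as nn

def gradient_saliency(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Return |d(score_target)/dx|: the absolute gradient of the target
    class score with respect to each input voxel, used as a relevance map."""
    model.eval()
    x = x.clone().requires_grad_(True)   # track gradients w.r.t. the input
    score = model(x)[0, target]          # logit of the class of interest
    score.backward()                     # populates x.grad
    return x.grad.detach().abs()         # gradient magnitude as relevance

# Hypothetical example: a toy 3D CNN classifying a single-channel MRI volume.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                     # e.g., patient vs. control
)
volume = torch.randn(1, 1, 32, 32, 32)   # (batch, channel, depth, height, width)
relevance = gradient_saliency(model, volume, target=1)
print(relevance.shape)                   # torch.Size([1, 1, 32, 32, 32])
```

More elaborate post-hoc relevance methods (e.g., layer-wise relevance propagation or Grad-CAM) refine this same basic idea of attributing a trained model's output back to input voxels.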
Pages: 26
Related Papers (50 total)
  • [1] A review of explainable AI in medical imaging: implications and applications
    Kinger, Shakti
    Kulkarni, Vrushali
    International Journal of Computers and Applications, 2024, 46(11): 983-997
  • [2] A review of evaluation approaches for explainable AI with applications in cardiology
    Salih, Ahmed M.
    Galazzo, Ilaria Boscolo
    Gkontra, Polyxeni
    Rauseo, Elisa
    Lee, Aaron Mark
    Lekadir, Karim
    Radeva, Petia
    Petersen, Steffen E.
    Menegaz, Gloria
    Artificial Intelligence Review, 2024, 57(09)
  • [3] Explainable AI: Foundations, Applications, Opportunities for Data Management Research
    Pradhan, Romila
    Lahiri, Aditya
    Galhotra, Sainyam
    Salimi, Babak
    Proceedings of the 2022 International Conference on Management of Data (SIGMOD '22), 2022: 2452-2457
  • [4] Explainable AI: Foundations, Applications, Opportunities for Data Management Research
    Pradhan, Romila
    Lahiri, Aditya
    Galhotra, Sainyam
    Salimi, Babak
    2022 IEEE 38th International Conference on Data Engineering (ICDE 2022), 2022: 3209-3212
  • [5] Explainable AI for Alzheimer Detection: A Review of Current Methods and Applications
    Hasan Saif, Fatima
    Al-Andoli, Mohamed Nasser
    Bejuri, Wan Mohd Yaakob Wan
    Applied Sciences-Basel, 2024, 14(22)
  • [6] Recent Applications of Explainable AI (XAI): A Systematic Literature Review
    Saarela, Mirka
    Podgorelec, Vili
    Applied Sciences-Basel, 2024, 14(19)
  • [7] Extensive Review of Literature on Explainable AI (XAI) in Healthcare Applications
    Mariappan, Ramasamy
    Recent Advances in Computer Science and Communications, 2025, 18(01)
  • [8] Data Quality and Explainable AI
    Bertossi, Leopoldo
    Geerts, Floris
    ACM Journal of Data and Information Quality, 2020, 12(02)
  • [9] A review of explainable and interpretable AI with applications in COVID-19 imaging
    Fuhrman, Jordan D.
    Gorre, Naveena
    Hu, Qiyuan
    Li, Hui
    El Naqa, Issam
    Giger, Maryellen L.
    Medical Physics, 2022, 49(01): 1-14
  • [10] Human-centered evaluation of explainable AI applications: a systematic review
    Kim, Jenia
    Maathuis, Henry
    Sent, Danielle
    Frontiers in Artificial Intelligence, 2024, 7