Explainable AI: A review of applications to neuroimaging data

Cited: 20
Authors
Farahani, Farzad V. [1 ,2 ]
Fiok, Krzysztof [2 ]
Lahijanian, Behshad [3 ,4 ]
Karwowski, Waldemar [2 ]
Douglas, Pamela K. [5 ]
Affiliations
[1] Johns Hopkins Univ, Dept Biostat, Baltimore, MD 21218 USA
[2] Univ Cent Florida, Dept Ind Engn & Management Syst, Orlando, FL 32816 USA
[3] Univ Florida, Dept Ind & Syst Engn, Gainesville, FL USA
[4] Georgia Inst Technol, H Milton Stewart Sch Ind & Syst Engn, Atlanta, GA USA
[5] Univ Cent Florida, Sch Modeling Simulat & Training, Orlando, FL USA
Keywords
explainable AI; interpretability; artificial intelligence (AI); deep learning; neural networks; medical imaging; neuroimaging; SUPPORT VECTOR MACHINE; DEEP NEURAL-NETWORKS; ARTIFICIAL-INTELLIGENCE; FEATURE-SELECTION; CLASSIFICATION; TRANSPARENCY; DISEASES; VISION; IMPACT; CANCER
DOI
10.3389/fnins.2022.906290
Chinese Library Classification (CLC): Q189 [Neuroscience]
Discipline code: 071006
Abstract
Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of the representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level, and in some cases superhuman, performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or diagnose accurately but also to provide explanations that support the model's decision in a form a human can readily interpret. This limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the "black box" and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretation, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then review recent applications of post-hoc relevance techniques to neuroimaging data. Finally, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
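The post-hoc relevance idea the abstract describes can be illustrated with a minimal gradient-times-input sketch. This is not the authors' method; it is a toy example on a hypothetical linear model (assumed weights `w` and input `x`), chosen because for a linear model the per-feature relevances sum exactly to the model output — the conservation property that methods such as layer-wise relevance propagation generalize to deep networks.

```python
import numpy as np

# Hypothetical "trained" linear model standing in for a classifier:
# f(x) = w . x, with assumed weights and one input feature vector.
w = np.array([0.5, -1.0, 2.0])   # assumed model weights
x = np.array([1.0, 2.0, 0.5])    # assumed input features (e.g., voxel summaries)

def model(x):
    return w @ x                 # model score f(x)

# Gradient-times-input relevance: for a linear model, df/dx_i = w_i,
# so the relevance of feature i is simply w_i * x_i.
relevance = w * x

# Sanity check: per-feature relevances decompose the model output exactly
# (the "conservation" property post-hoc relevance methods aim for).
assert np.isclose(relevance.sum(), model(x))
print(relevance)  # per-feature relevance scores
```

For a deep network the gradient is no longer constant in `x`, which is why refinements such as integrated gradients or layer-wise relevance propagation exist; the decomposition-of-output intuition, however, is the same.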
Pages: 26
Related papers (50 in total; entries [41]-[50] shown)
  • [41] On Quantifying Literals in Boolean Logic and Its Applications to Explainable AI
    Darwiche, Adnan
    Marquis, Pierre
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2021, 72 : 285 - 328
  • [43] Explainable AI approaches in deep learning: Advancements, applications and challenges
    Hosain, Md. Tanzib
    Jim, Jamin Rahman
    Mridha, M. F.
    Kabir, Md Mohsin
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 117
  • [44] Survey on Explainable AI: From Approaches, Limitations and Applications Aspects
    Yang, Wenli
    Wei, Yuchen
    Wei, Hanyu
    Chen, Yanyu
    Huang, Guan
    Li, Xiang
    Li, Renjie
    Yao, Naimeng
    Wang, Xinyi
    Gu, Xiaotong
    Amin, Muhammad Bilal
    Kang, Byeong
    Human-Centric Intelligent Systems, 2023, 3 (3): : 161 - 188
  • [45] A literature review of artificial intelligence (AI) for medical image segmentation: from AI and explainable AI to trustworthy AI
    Teng, Zixuan
    Li, Lan
    Xin, Ziqing
    Xiang, Dehui
    Huang, Jiang
    Zhou, Hailing
    Shi, Fei
    Zhu, Weifang
    Cai, Jing
    Peng, Tao
    Chen, Xinjian
    QUANTITATIVE IMAGING IN MEDICINE AND SURGERY, 2024, 14 (12) : 9620 - 9652
  • [46] Prediction of Postsurgical Infection by Explainable AI and Strategic Data Imputation
    Guillen-Ramirez, Hugo
    Sanchez-Taltavull, Daniel
    Peisl, Sarah
    Perrodin, Stephanie
    Triep, Karen
    Endrich, Olga
    Beldi, Guido
    SWISS MEDICAL WEEKLY, 2024, 154 : 18S - 18S
  • [47] Explainable AI decision model for ECG data of cardiac disorders
    Anand, Atul
    Kadian, Tushar
    Shetty, Manu Kumar
    Gupta, Anubha
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2022, 75
  • [48] Explainable AI (ex-AI)
    Holzinger, Andreas
    Informatik-Spektrum, 2018, 41 (02) : 138 - 143
  • [49] From "Explainable AI" to "Graspable AI"
    Ghajargar, Maliheh
    Bardzell, Jeffrey
    Renner, Alison Smith
    Krogh, Peter Gall
    Hook, Kristina
    Cuartielles, David
    Boer, Laurens
    Wiberg, Mikael
    PROCEEDINGS OF THE FIFTEENTH INTERNATIONAL CONFERENCE ON TANGIBLE, EMBEDDED, AND EMBODIED INTERACTION, TEI 2021, 2021
  • [50] Perturbation-Based Explainable AI for ECG Sensor Data
    Paralic, Jan
    Kolarik, Michal
    Paralicova, Zuzana
    Lohaj, Oliver
    Jozefik, Adam
    APPLIED SCIENCES-BASEL, 2023, 13 (03)