Interpretable Deep Learning for Neuroimaging-Based Diagnostic Classification

Cited by: 0
|
Authors
Deshpande, Gopikrishna [1 ,2 ,3 ,4 ,5 ,6 ]
Masood, Janzaib [1 ]
Huynh, Nguyen [1 ]
Denney Jr, Thomas S. [1 ,2 ,3 ,4 ]
Dretsch, Michael N. [7 ]
Affiliations
[1] Auburn Univ, Neuroimaging Ctr, Dept Elect & Comp Engn, Auburn, AL 36849 USA
[2] Auburn Univ, Dept Psychol Sci, Auburn, AL 36849 USA
[3] Alabama Adv Imaging Consortium, Birmingham, AL 35294 USA
[4] Auburn Univ, Ctr Neurosci, Auburn, AL 36849 USA
[5] Natl Inst Mental Hlth & Neurosci, Dept Psychiat, Bengaluru 560029, India
[6] Indian Inst Technol Hyderabad, Dept Heritage Sci & Technol, Hyderabad 502285, India
[7] Walter Reed Army Inst Res West, Joint Base Lewis McChord, WA 98433 USA
Keywords
Resting-state functional magnetic resonance; resting-state functional connectivity; interpretable deep learning; POSTTRAUMATIC-STRESS-DISORDER; ANTERIOR CINGULATE CORTEX; RESTING-STATE FMRI; FUNCTIONAL CONNECTIVITY; NETWORKS; ABUSE; MEMORIES; VETERANS; DISEASE; SERVICE;
DOI
10.1109/ACCESS.2024.3388911
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Subject Classification Code: 0812
Abstract
Deep neural networks (DNNs) are increasingly used in neuroimaging research, both for the diagnosis of brain disorders and for understanding the human brain. Despite their impressive performance, their adoption in medical applications will remain limited unless there is more transparency about how these algorithms arrive at their decisions. We address this issue in the current report. A DNN classifier was trained to discriminate between healthy subjects and those with posttraumatic stress disorder (PTSD) using brain connectivity obtained from functional magnetic resonance imaging data, achieving 90% accuracy. Brain connectivity features important for classification were generated for a pool of test subjects, and permutation testing was used to identify significantly discriminative connections. Heatmaps of these significant paths were generated with 10 different interpretability algorithms based on variants of layer-wise relevance propagation and gradient-based attribution. Because different interpretability algorithms make different assumptions about the data and the model, their explanations showed both commonalities and differences. We therefore built a consensus across interpretability methods, which aligned well with existing knowledge about brain alterations underlying PTSD. More than 20 regions already acknowledged for their relevance to PTSD in prior studies were confidently identified, each with a voting score exceeding 8 and a family-wise-corrected threshold below 0.05. Our work illustrates how robustness and physiological plausibility of explanations can be achieved when interpreting DNN-based classifications in diagnostic neuroimaging by evaluating convergence across methods. This will be crucial for trust in AI-based medical diagnostics in the future.
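The consensus step described above (per-method permutation testing on attribution maps, followed by voting across methods) can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration in Python/NumPy, not the authors' implementation: the array shapes, the sign-flip null distribution, the 0.05 family-wise threshold, and the 8-of-10 vote cutoff are assumptions chosen only to mirror the abstract.

```python
# Hypothetical sketch of a consensus across interpretability methods.
# Assumes attribution maps were already computed for each method and test subject;
# shapes, the sign-flip null, and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def significant_connections(attr, n_perm=2000, alpha=0.05):
    """Max-statistic permutation test (family-wise error control) for one method.

    attr: array of shape (n_subjects, n_connections) holding per-subject
    attribution values for every functional connection.
    """
    observed = np.abs(attr.mean(axis=0))
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly sign-flip subjects to build a null distribution of the
        # maximum |mean attribution| across all connections.
        flips = rng.choice([-1.0, 1.0], size=attr.shape[0])[:, None]
        null_max[i] = np.abs((attr * flips).mean(axis=0)).max()
    # Connections whose observed statistic exceeds the (1 - alpha) quantile of
    # the null maxima survive family-wise error correction at level alpha.
    return observed > np.quantile(null_max, 1.0 - alpha)

def consensus_votes(all_attr, min_votes=8):
    """Vote across methods: keep connections flagged by >= min_votes methods."""
    votes = np.sum([significant_connections(a) for a in all_attr], axis=0)
    return votes, votes >= min_votes

# Toy usage with random data standing in for real attribution maps
# (10 methods x 40 test subjects x 190 connections, i.e. 20 regions).
all_attr = rng.standard_normal((10, 40, 190))
votes, consensus = consensus_votes(all_attr)
print("connections passing the consensus:", int(consensus.sum()))
```

With random inputs the consensus set should be essentially empty; on real attribution maps, the connections that survive both the family-wise permutation threshold and the cross-method vote would correspond to the discriminative paths and regions reported in the paper.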
Pages: 55474-55490 (17 pages)