Explaining deep neural networks: A survey on the global interpretation methods

Cited by: 52
Authors
Saleem, Rabia [1 ]
Yuan, Bo [2 ]
Kurugollu, Fatih [1 ,3 ]
Anjum, Ashiq [2 ]
Liu, Lu [2 ]
Affiliations
[1] Univ Derby, Sch Comp & Engn, Kedleston Rd, Derby DE22 1GB, England
[2] Univ Leicester, Sch Comp & Math Sci, Univ Rd, Leicester LE1 7RH, England
[3] Univ Sharjah, Dept Comp Sci, Sharjah, U Arab Emirates
Keywords
Artificial intelligence; Deep neural networks; Black-box models; Explainable artificial intelligence; Global interpretation; BLACK-BOX; CLASSIFIERS; RULES; MODEL
DOI
10.1016/j.neucom.2022.09.129
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A substantial amount of research has been carried out on Explainable Artificial Intelligence (XAI) models, especially those which explain the deep architectures of neural networks. A number of XAI approaches have been proposed to achieve trust in Artificial Intelligence (AI) models as well as to explain specific decisions made within these models. Among these approaches, global interpretation methods have emerged as the prominent methods of explainability because they have the strength to explain every feature and the structure of the model. This survey attempts to provide a comprehensive review of global interpretation methods that completely explain the behaviour of AI models. We present a taxonomy of the available global interpretation models and systematically highlight the critical features and algorithms that differentiate them from local as well as hybrid models of explainability. Through examples and case studies from the literature, we evaluate the strengths and weaknesses of the global interpretation models and assess the challenges that arise when these methods are put into practice. We conclude the paper by providing future directions of research on how the existing challenges in global interpretation methods could be addressed, and what values and opportunities could be realized by resolving these challenges. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
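To make the notion of a global interpretation concrete, a minimal sketch of one widely used model-agnostic global method, permutation feature importance, is given below. The "model" and the toy dataset here are entirely hypothetical, not drawn from the survey; the sketch only illustrates the general idea that a global method scores each feature over the whole dataset rather than explaining a single prediction.

```python
import random

# Hypothetical black-box "model": only feature x0 actually matters.
def model(x0, x1):
    return 1 if x0 > 0.5 else 0

# Toy dataset (hypothetical): rows of (x0, x1, label).
data = [(0.9, 0.1, 1), (0.8, 0.7, 1), (0.2, 0.9, 0), (0.1, 0.3, 0),
        (0.7, 0.2, 1), (0.3, 0.8, 0), (0.95, 0.5, 1), (0.05, 0.6, 0)]

def accuracy(rows):
    return sum(model(x0, x1) == y for x0, x1, y in rows) / len(rows)

def permutation_importance(rows, feature_index, seed=0):
    """Global importance of one feature: the drop in accuracy over the
    whole dataset when that feature's column is randomly shuffled."""
    rng = random.Random(seed)
    column = [row[feature_index] for row in rows]
    rng.shuffle(column)
    shuffled = [tuple(v if i == feature_index else row[i] for i in range(3))
                for row, v in zip(rows, column)]
    return accuracy(rows) - accuracy(shuffled)

imp0 = permutation_importance(data, 0)  # decisive feature
imp1 = permutation_importance(data, 1)  # ignored feature: shuffling it changes nothing
```

Because the score is computed over the full dataset, it characterizes the model's overall reliance on each feature; this is what distinguishes global methods from local ones, which explain one prediction at a time.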
Pages: 165 - 180
Number of pages: 16
Related papers
50 records in total
  • [41] Explaining Deep Neural Networks for Bearing Fault Detection with Vibration Concepts
    Decker, Thomas
    Lebacher, Michael
    Tresp, Volker
    2023 IEEE 21ST INTERNATIONAL CONFERENCE ON INDUSTRIAL INFORMATICS, INDIN, 2023,
  • [42] Explaining decisions of deep neural networks used for fish age prediction
    Ordonez, Alba
    Eikvil, Line
    Salberg, Arnt-Borre
    Harbitz, Alf
    Murray, Sean Meling
    Kampffmeyer, Michael C.
    PLOS ONE, 2020, 15 (06):
  • [43] NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning
    Alzantot, Moustafa
    Widdicombe, Amy
    Julier, Simon
    Srivastava, Mani
    2019 IEEE INTERNATIONAL CONFERENCE ON SMART COMPUTING (SMARTCOMP 2019), 2019, : 81 - 86
  • [44] A Benchmark for Interpretability Methods in Deep Neural Networks
    Hooker, Sara
    Erhan, Dumitru
    Kindermans, Pieter-Jan
    Kim, Been
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [45] Methods for interpreting and understanding deep neural networks
    Montavon, Gregoire
    Samek, Wojciech
    Mueller, Klaus-Robert
    DIGITAL SIGNAL PROCESSING, 2018, 73 : 1 - 15
  • [46] Distributed Newton Methods for Deep Neural Networks
    Wang, Chien-Chih
    Tan, Kent Loong
    Chen, Chun-Ting
    Lin, Yu-Hsiang
    Keerthi, S. Sathiya
    Mahajan, Dhruv
    Sundararajan, S.
    Lin, Chih-Jen
    NEURAL COMPUTATION, 2018, 30 (06) : 1673 - 1724
  • [47] Automated interpretation of the coronary angioscopy with deep convolutional neural networks
    Miyoshi, Toru
    Higaki, Akinori
    Kawakami, Hideo
    Yamaguchi, Osamu
    OPEN HEART, 2020, 7 (01):
  • [48] Seismic fault interpretation based on deep convolutional neural networks
    Chang D.
    Yong X.
    Wang Y.
    Yang W.
    Li H.-S.
    Zhang G.
Science Press, (56): 1 - 8
  • [49] Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization
    Noh, Hyeonwoo
    You, Tackgeun
    Mun, Jonghwan
    Han, Bohyung
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [50] Synchrosqueezing voices through deep neural networks for horizon interpretation
    AlSalmi, Haifa
    Wang, Yanghua
    INTERPRETATION-A JOURNAL OF SUBSURFACE CHARACTERIZATION, 2024, 12 (03): : SE89 - SE102