Explaining deep neural networks: A survey on the global interpretation methods

Cited by: 52
Authors
Saleem, Rabia [1 ]
Yuan, Bo [2 ]
Kurugollu, Fatih [1 ,3 ]
Anjum, Ashiq [2 ]
Liu, Lu [2 ]
Affiliations
[1] Univ Derby, Sch Comp & Engn, Kedleston Rd, Derby DE22 1GB, England
[2] Univ Leicester, Sch Comp & Math Sci, Univ Rd, Leicester LE1 7RH, England
[3] Univ Sharjah, Dept Comp Sci, Sharjah, U Arab Emirates
Keywords
Artificial intelligence; Deep neural networks; Black box models; Explainable artificial intelligence; Global interpretation; BLACK-BOX; CLASSIFIERS; RULES; MODEL
DOI
10.1016/j.neucom.2022.09.129
CLC Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A substantial amount of research has been carried out in Explainable Artificial Intelligence (XAI) models, especially in those which explain the deep architectures of neural networks. A number of XAI approaches have been proposed to achieve trust in Artificial Intelligence (AI) models as well as provide explainability of specific decisions made within these models. Among these approaches, global interpretation methods have emerged as the prominent methods of explainability because they have the strength to explain every feature and the structure of the model. This survey attempts to provide a comprehensive review of global interpretation methods that completely explain the behaviour of the AI models. We present a taxonomy of the available global interpretation models and systematically highlight the critical features and algorithms that differentiate them from local as well as hybrid models of explainability. Through examples and case studies from the literature, we evaluate the strengths and weaknesses of the global interpretation models and assess challenges when these methods are put into practice. We conclude the paper by providing the future directions of research in how the existing challenges in global interpretation methods could be addressed and what values and opportunities could be realized by the resolution of these challenges. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
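The abstract distinguishes global interpretation methods, which explain every feature and the overall structure of a model, from local methods that explain single predictions. As an illustrative sketch only (not taken from the surveyed paper; the linear stand-in model and helper names are invented for this example), permutation feature importance is one widely used global method: each feature column is shuffled in turn, and the resulting drop in predictive accuracy measures how much the model relies on that feature across the whole dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 4
X = rng.normal(size=(n, d))
true_w = np.array([2.0, 0.0, 1.0, 0.0])      # features 1 and 3 are irrelevant
y = (X @ true_w > 0).astype(int)

def model_predict(X):
    """Black-box stand-in: a fixed linear decision rule."""
    return (X @ true_w > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, model_predict(X))

# Global importance: shuffle each feature and average the accuracy drop.
importances = []
for j in range(d):
    drops = []
    for _ in range(10):                       # repeat shuffles to reduce noise
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
        drops.append(baseline - accuracy(y, model_predict(Xp)))
    importances.append(np.mean(drops))

for j, imp in enumerate(importances):
    print(f"feature {j}: importance {imp:.3f}")
```

The irrelevant features (zero weight) receive zero importance because shuffling them leaves every prediction unchanged; the same procedure works for any opaque `predict` function, which is what makes it a model-agnostic global method.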
Pages: 165-180
Page count: 16
Related Papers (50 in total)
  • [21] An Information Theoretic Interpretation to Deep Neural Networks
    Xu, Xiangxiang
    Huang, Shao-Lun
    Zheng, Lizhong
    Wornell, Gregory W.
    ENTROPY, 2022, 24 (01)
  • [22] Methods for Pruning Deep Neural Networks
    Vadera, Sunil
    Ameen, Salem
    IEEE ACCESS, 2022, 10 : 63280 - 63300
  • [23] Explaining deep neural networks processing raw diagnostic signals
    Herwig, Nico
    Borghesani, Pietro
    MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2023, 200
  • [24] Explaining deep neural networks for knowledge discovery in electrocardiogram analysis
    Hicks, Steven A.
    Isaksen, Jonas L.
    Thambawita, Vajira
    Ghouse, Jonas
    Ahlberg, Gustav
    Linneberg, Allan
    Grarup, Niels
    Strümke, Inga
    Ellervik, Christina
    Olesen, Morten Salling
    Hansen, Torben
    Graff, Claus
    Holstein-Rathlou, Niels-Henrik
    Halvorsen, Pål
    Maleckar, Mary M.
    Riegler, Michael A.
    Kanters, Jørgen K.
    SCIENTIFIC REPORTS, 2021, 11 (01)
  • [25] ON NETWORK SCIENCE AND MUTUAL INFORMATION FOR EXPLAINING DEEP NEURAL NETWORKS
    Davis, Brian
    Bhatt, Umang
    Bhardwaj, Kartikeya
    Marculescu, Radu
    Moura, Jose M. P.
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8399 - 8403
  • [26] Towards Explaining Deep Neural Networks Through Graph Analysis
    Horta, Vitor A. C.
    Mileo, Alessandra
    DATABASE AND EXPERT SYSTEMS APPLICATIONS (DEXA 2019), 2019, 1062 : 155 - 165
  • [27] Treeview and Disentangled Representations for Explaining Deep Neural Networks Decisions
    Sattigeri, Prasanna
    Ramamurthy, Karthikeyan Natesan
    Thiagarajan, Jayaraman J.
    Kailkhura, Bhavya
    2020 54TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2020, : 284 - 288
  • [28] Explaining deep neural networks for knowledge discovery in electrocardiogram analysis
    Hicks, Steven A.
    Isaksen, Jonas L.
    Thambawita, Vajira
    Ghouse, Jonas
    Ahlberg, Gustav
    Linneberg, Allan
    Grarup, Niels
    Strumke, Inga
    Ellervik, Christina
    Olesen, Morten Salling
    Hansen, Torben
    Graff, Claus
    Holstein-Rathlou, Niels-Henrik
    Halvorsen, Pal
    Maleckar, Mary M.
    Riegler, Michael A.
    Kanters, Jorgen K.
    SCIENTIFIC REPORTS, 2021, 11 (01)
  • [29] Interpretation of Deep Neural Networks Based on Decision Trees
    Ueno, Tsukasa
    Zhao, Qiangfu
    2018 16TH IEEE INT CONF ON DEPENDABLE, AUTONOM AND SECURE COMP, 16TH IEEE INT CONF ON PERVAS INTELLIGENCE AND COMP, 4TH IEEE INT CONF ON BIG DATA INTELLIGENCE AND COMP, 3RD IEEE CYBER SCI AND TECHNOL CONGRESS (DASC/PICOM/DATACOM/CYBERSCITECH), 2018, : 256 - 261
  • [30] Deep Neural Networks and Tabular Data: A Survey
    Borisov, Vadim
    Leemann, Tobias
    Sessler, Kathrin
    Haug, Johannes
    Pawelczyk, Martin
    Kasneci, Gjergji
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (06) : 7499 - 7519