Local and Global Interpretability Using Mutual Information in Explainable Artificial Intelligence

Cited by: 1
Authors
Islam, Mir Riyanul [1 ]
Ahmed, Mobyen Uddin [1 ]
Begum, Shahina [1 ]
Affiliations
[1] Malardalen Univ, Sch Innovat Design & Engn, Vasteras, Sweden
Funding
Swedish Research Council;
Keywords
autoencoder; electroencephalography; explainability; feature extraction; mental workload; mutual information;
DOI
10.1109/ISCMI53840.2021.9654898
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Numerous studies have exploited the potential of Artificial Intelligence (AI) and Machine Learning (ML) models to develop intelligent systems in diverse domains for complex tasks, such as analysing data, extracting features, prediction and recommendation. However, these systems presently face acceptability issues from end-users. The models deployed behind the systems mostly analyse the correlations or dependencies between the input and output to uncover the important characteristics of the input features, but they lack explainability and interpretability, which causes the acceptability issues of intelligent systems and has given rise to the research domain of eXplainable Artificial Intelligence (XAI). In this study, to overcome these shortcomings, a hybrid XAI approach is developed to explain an AI/ML model's inference mechanism as well as its final outcome. The overall approach comprises 1) a convolutional encoder that extracts deep features from the data and computes their relevancy to features extracted using domain knowledge, 2) a model for classifying data points using the features from the autoencoder, and 3) a process of explaining the model's working procedure and decisions using mutual information to provide global and local interpretability. To demonstrate and validate the proposed approach, experiments were performed on an electroencephalography dataset from the road-safety domain to classify drivers' in-vehicle mental workload. The outcome was promising: the approach produced a Support Vector Machine classifier for mental workload with approximately 89% accuracy. Moreover, the proposed approach can also explain the classifier model's behaviour and decisions with a combined illustration of Shapley values and mutual information.
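The three-part pipeline the abstract describes can be sketched in miniature. This is a hedged illustration only, not the authors' implementation: synthetic arrays stand in for the EEG-derived deep and domain features, scikit-learn's `mutual_info_regression` stands in for the paper's mutual-information computation, and a default `SVC` stands in for the reported classifier.

```python
# Sketch of the abstract's pipeline with scikit-learn stand-ins:
# mutual information relates autoencoder ("deep") features to
# domain-knowledge features, and an SVM classifies workload.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for features extracted from EEG recordings.
n_samples = 200
domain_features = rng.normal(size=(n_samples, 4))   # e.g. band-power features
deep_features = (domain_features @ rng.normal(size=(4, 8))
                 + 0.1 * rng.normal(size=(n_samples, 8)))
labels = (domain_features[:, 0] > 0).astype(int)    # binary workload: low/high

# Global interpretability: mutual information between each domain feature
# and the deep features indicates what the encoder has captured.
mi = np.array([
    mutual_info_regression(deep_features, domain_features[:, j], random_state=0)
    for j in range(domain_features.shape[1])
])  # shape: (n_domain_features, n_deep_features)

# Classification on the deep features, mirroring step 2 of the approach.
X_tr, X_te, y_tr, y_te = train_test_split(deep_features, labels, random_state=0)
accuracy = SVC().fit(X_tr, y_tr).score(X_te, y_te)
print(mi.shape, accuracy)
```

High mutual-information entries in `mi` flag which domain features a given deep feature encodes; the paper combines this global view with per-sample Shapley values for local interpretability.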
Pages: 191 - 195
Number of pages: 5
Related Papers
50 records total
  • [31] Explainable Artificial Intelligence: A Survey
    Dosilovic, Filip Karlo
    Brcic, Mario
    Hlupic, Nikica
    2018 41ST INTERNATIONAL CONVENTION ON INFORMATION AND COMMUNICATION TECHNOLOGY, ELECTRONICS AND MICROELECTRONICS (MIPRO), 2018, : 210 - 215
  • [32] Information-seeking dialogue for explainable artificial intelligence: Modelling and analytics
    Stepin, Ilia
    Budzynska, Katarzyna
    Catala, Alejandro
    Pereira-Farina, Martin
    Alonso-Moral, Jose M.
    ARGUMENT & COMPUTATION, 2024, 15 (01) : 49 - 107
  • [33] Enhancing Interpretability in Drill Bit Wear Analysis through Explainable Artificial Intelligence: A Grad-CAM Approach
    Senjoba, Lesego
    Ikeda, Hajime
    Toriya, Hisatoshi
    Adachi, Tsuyoshi
    Kawamura, Youhei
    APPLIED SCIENCES-BASEL, 2024, 14 (09):
  • [34] On Using Explainable Artificial Intelligence for Failure Identification in Microwave Networks
    Ayoub, Omran
    Musumeci, Francesco
    Ezzeddine, Fatima
    Passera, Claudio
    Tornatore, Massimo
    25TH CONFERENCE ON INNOVATION IN CLOUDS, INTERNET AND NETWORKS (ICIN 2022), 2022, : 48 - 55
  • [35] Explainable artificial intelligence to detect atrial fibrillation using electrocardiogram
    Jo, Yong-Yeon
    Cho, Younghoon
    Lee, Soo Youn
    Kwon, Joon-myoung
    Kim, Kyung-Hee
    Jeon, Ki-Hyun
    Cho, Soohyun
    Park, Jinsik
    Oh, Byung-Hee
    INTERNATIONAL JOURNAL OF CARDIOLOGY, 2021, 328 : 104 - 110
  • [36] Interpretation of load forecasting using explainable artificial intelligence techniques
    Lee Y.-G.
    Oh J.-Y.
    Kim G.
    Korean Institute of Electrical Engineers, 1600, (69) : 480 - 485
  • [37] Explainable Artificial Intelligence for Prediction of Diabetes using Stacking Classifier
    Devi, Aruna B.
    Karthik, N.
    10TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTING AND COMMUNICATION TECHNOLOGIES, CONECCT 2024, 2024,
  • [38] Identifying preflare spectral features using explainable artificial intelligence
    Panos, Brandon
    Kleint, Lucia
    Zbinden, Jonas
    ASTRONOMY & ASTROPHYSICS, 2023, 671
  • [39] Analyzing credit spread changes using explainable artificial intelligence
    Heger, Julia
    Min, Aleksey
    Zagst, Rudi
    INTERNATIONAL REVIEW OF FINANCIAL ANALYSIS, 2024, 94
  • [40] A Deep Diagnostic Framework Using Explainable Artificial Intelligence and Clustering
    Thunold, Havard Horgen
    Riegler, Michael A.
    Yazidi, Anis
    Hammer, Hugo L.
    Isomoto, Hajime
    Marquering, Henk A.
    DIAGNOSTICS, 2023, 13 (22)