Local and Global Interpretability Using Mutual Information in Explainable Artificial Intelligence

Times Cited: 1
Authors
Islam, Mir Riyanul [1 ]
Ahmed, Mobyen Uddin [1 ]
Begum, Shahina [1 ]
Affiliations
[1] Malardalen Univ, Sch Innovat Design & Engn, Vasteras, Sweden
Funding
Swedish Research Council;
Keywords
autoencoder; electroencephalography; explainability; feature extraction; mental workload; mutual information;
DOI
10.1109/ISCMI53840.2021.9654898
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Numerous studies have exploited the potential of Artificial Intelligence (AI) and Machine Learning (ML) models to develop intelligent systems for complex tasks in diverse domains, such as data analysis, feature extraction, prediction and recommendation. However, these systems presently face acceptability issues among end-users. The models deployed behind such systems mostly analyse the correlations or dependencies between input and output to uncover the important characteristics of the input features, but they lack explainability and interpretability; this shortcoming causes the acceptability issues of intelligent systems and has given rise to the research domain of eXplainable Artificial Intelligence (XAI). In this study, to overcome these shortcomings, a hybrid XAI approach is developed to explain an AI/ML model's inference mechanism as well as its final outcome. The overall approach comprises 1) a convolutional autoencoder that extracts deep features from the data and computes their relevancy to features extracted using domain knowledge, 2) a model that classifies data points using the features from the autoencoder, and 3) a process that explains the model's working procedure and decisions using mutual information to provide global and local interpretability. To demonstrate and validate the proposed approach, experiments were performed on an electroencephalography (EEG) dataset from the road-safety domain to classify drivers' in-vehicle mental workload. The outcome was promising: the resulting Support Vector Machine classifier reached approximately 89% accuracy on the mental-workload classification task. Moreover, the proposed approach also explains the classifier's behaviour and decisions through a combined illustration of Shapley values and mutual information.
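As a rough illustration of the three-stage pipeline summarised above, the following Python sketch simulates the autoencoder's latent features with random data and uses scikit-learn's mutual_info_classif together with the shap package's KernelExplainer as stand-ins for the paper's mutual-information relevancy and Shapley-value computations. All shapes, labels and parameter choices here are illustrative assumptions, not the authors' implementation.

import numpy as np
import shap
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 1) Stand-in for the deep features a convolutional autoencoder would
#    extract from EEG windows (hypothetical: 500 windows x 16 features).
X_deep = rng.normal(size=(500, 16))
y = (X_deep[:, :4].sum(axis=1) > 0).astype(int)  # toy mental-workload label

X_tr, X_te, y_tr, y_te = train_test_split(X_deep, y, random_state=0)

# 2) Classifier stage: a Support Vector Machine on the latent features.
clf = SVC(probability=True).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# 3a) Global interpretability: mutual information between each latent
#     feature and the class label yields a model-independent ranking.
mi = mutual_info_classif(X_tr, y_tr, random_state=0)
for i in np.argsort(mi)[::-1][:5]:
    print(f"feature {i}: MI = {mi[i]:.3f}")

# 3b) Local interpretability: Shapley values for a single test window
#     via a model-agnostic kernel explainer on a small background set.
explainer = shap.KernelExplainer(clf.predict_proba, shap.sample(X_tr, 50))
shap_values = explainer.shap_values(X_te[:1])

Note that in the paper itself the mutual information links the autoencoder's deep features to domain-knowledge features (rather than to the labels, as in this toy example), which is what ties the global explanation back to interpretable EEG characteristics.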
Pages: 191-195
Page Count: 5