Local and Global Interpretability Using Mutual Information in Explainable Artificial Intelligence

Cited: 1
Authors
Islam, Mir Riyanul [1]
Ahmed, Mobyen Uddin [1]
Begum, Shahina [1]
Affiliations
[1] Malardalen Univ, Sch Innovat Design & Engn, Vasteras, Sweden
Funding
Swedish Research Council
Keywords
autoencoder; electroencephalography; explainability; feature extraction; mental workload; mutual information;
DOI
10.1109/ISCMI53840.2021.9654898
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Numerous studies have exploited the potential of Artificial Intelligence (AI) and Machine Learning (ML) models to develop intelligent systems in diverse domains for complex tasks such as data analysis, feature extraction, prediction, and recommendation. However, these systems currently face acceptability issues among end-users. The models deployed behind such systems mostly analyse the correlations or dependencies between the input and output to uncover the important characteristics of the input features, but they lack explainability and interpretability; this causes the acceptability issues of intelligent systems and has given rise to the research domain of eXplainable Artificial Intelligence (XAI). In this study, to overcome these shortcomings, a hybrid XAI approach is developed to explain an AI/ML model's inference mechanism as well as its final outcome. The overall approach comprises 1) a convolutional autoencoder that extracts deep features from the data and computes their relevancy to features extracted using domain knowledge, 2) a model that classifies data points using the features from the autoencoder, and 3) a process that explains the model's working procedure and decisions using mutual information to provide global and local interpretability. To demonstrate and validate the proposed approach, experiments were performed on an electroencephalography (EEG) dataset from the road-safety domain to classify drivers' in-vehicle mental workload. The outcome of the experiment was promising: it produced a Support Vector Machine classifier for mental workload with approximately 89% accuracy. Moreover, the proposed approach can also explain the classifier's behaviour and decisions through a combined illustration of Shapley values and mutual information.
Pages: 191-195
Page count: 5
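
For readers who want a concrete picture of the pipeline described in the abstract, the following is a minimal sketch, not the authors' implementation: mutual information I(X;Y) = Σ_x Σ_y p(x,y) log[ p(x,y) / (p(x) p(y)) ] ranks features globally, an SVM classifies, and Shapley values explain individual predictions. All data, feature counts, and parameter choices below are hypothetical placeholders standing in for the autoencoder-extracted EEG features.

# A minimal, hypothetical sketch of the explanation stage, using synthetic
# stand-ins for the deep features; requires numpy and scikit-learn
# (the `shap` package is optional).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: 200 EEG epochs described by 16 "deep" features, with a
# synthetic binary high/low mental-workload label.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

# Global interpretability: mutual information between each feature and the
# label ranks features by overall relevance to the classification task.
mi = mutual_info_classif(X, y, random_state=0)
for i in np.argsort(mi)[::-1][:5]:
    print(f"feature {i}: MI = {mi[i]:.3f}")

# Classification: an SVM on the deep features (the paper reports roughly 89%
# accuracy on its real EEG dataset; this synthetic score will differ).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")

# Local interpretability: per-sample Shapley values, here via the optional
# `shap` package, which the paper combines with mutual information.
try:
    import shap
    explainer = shap.KernelExplainer(clf.predict_proba, X_tr[:50])
    values = explainer.shap_values(X_te[:1])
    print("Shapley values computed for one test epoch.")
except ImportError:
    print("install `shap` for the local-explanation step")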