Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing

Cited by: 4
Authors
Chen, Zhouyuan [1 ]
Lian, Zhichao [1 ]
Xu, Zhe [1 ]
Affiliation
[1] Nanjing Univ Sci & Technol, Sch Cyberspace Secur, Nanjing 214400, Peoples R China
Keywords
interpretability; model-agnostic explanations; feature relationship; superpixel;
DOI
10.3390/axioms12100997
Chinese Library Classification
O29 [Applied Mathematics];
Discipline code
070104;
Abstract
In the field of explainable artificial intelligence (XAI), an algorithm or tool helps people understand how a model makes a decision. This, in turn, supports selecting important features to reduce computational cost and enable high-performance computing. However, existing methods typically visualize important features or highlight active neurons, and few of them show the importance of the relationships between features. In recent years, some white-box methods have taken feature relationships into account, but most of them work only on specific models. Black-box methods avoid this limitation, yet most of them apply only to tabular or text data rather than image data. To address these problems, we propose a local interpretable model-agnostic explanation approach based on feature relationships. The approach incorporates the relationships between features into the interpretation process and then visualizes the interpretation results. Finally, this paper conducts extensive experiments to verify the correctness of the extracted feature relationships and evaluates the XAI method in terms of accuracy, fidelity, and consistency.
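The abstract assumes familiarity with LIME-style local surrogate explanation over superpixels, the general technique the proposed method builds on. As background only, here is a minimal sketch of that baseline idea (perturb superpixels, query the black-box model, fit a weighted linear surrogate), not the authors' exact algorithm; the names `lime_superpixel_explain`, `predict_fn`, and `segments` are illustrative assumptions.

```python
import numpy as np

def lime_superpixel_explain(image, predict_fn, segments, n_samples=200, seed=0):
    """LIME-style sketch: fit a local linear surrogate over superpixel on/off masks.

    image      -- 2-D array of pixel values
    predict_fn -- black-box model; maps an image to a scalar score (assumed)
    segments   -- integer array, same shape as image, labeling each superpixel
    """
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    k = len(seg_ids)
    # Random binary masks: 1 keeps a superpixel, 0 replaces it with a baseline.
    masks = rng.integers(0, 2, size=(n_samples, k))
    baseline = image.mean()
    preds = np.empty(n_samples)
    for i, mask in enumerate(masks):
        perturbed = image.copy()
        for j, s in enumerate(seg_ids):
            if mask[j] == 0:
                perturbed[segments == s] = baseline
        preds[i] = predict_fn(perturbed)
    # Proximity weights: samples closer to the original image count more.
    w = np.exp(-(k - masks.sum(axis=1)) / k)
    X = np.hstack([np.ones((n_samples, 1)), masks])
    W = np.diag(np.sqrt(w))
    coef = np.linalg.lstsq(W @ X, W @ preds, rcond=None)[0]
    # Surrogate coefficients serve as per-superpixel importance scores.
    return dict(zip(seg_ids, coef[1:]))
```

The proposed method differs from this baseline in that it also models relationships *between* superpixels, whereas the sketch above scores each superpixel independently.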
Pages: 11