TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security

Cited by: 53
Authors
Zolanvari, Maede [1 ]
Yang, Zebo [1 ]
Khan, Khaled [2 ]
Jain, Raj [1 ]
Meskin, Nader [2 ]
Affiliations
[1] Washington Univ, Dept Comp Sci & Engn, St Louis, MO 63130 USA
[2] Qatar Univ, Dept Comp Sci & Engn, Doha, Qatar
Keywords
Artificial intelligence; Industrial Internet of Things; Numerical models; Mathematical models; Computational modeling; Data models; Predictive models; Artificial intelligence (AI); cybersecurity; explainable AI (XAI); Industrial Internet of Things (IIoT); machine learning (ML); statistical modeling; trustworthy AI; MACHINE; INTERNET;
DOI
10.1109/JIOT.2021.3122019
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Despite artificial intelligence (AI)'s significant growth, its "black box" nature creates challenges in generating adequate trust. Thus, it is seldom utilized as a standalone unit in high-risk IoT applications, such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing appropriately fast and accurate XAI is still challenging, especially in numerical applications. Here, we propose a universal XAI model, named transparency relying upon statistical theory (TRUST), which is model-agnostic, high performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information (MI) to rank these variables, pick only the ones most influential on the AI's outputs, and call them "representatives" of the classes. Then, we use multimodal Gaussian (MMG) distributions to determine the likelihood of any new sample belonging to each class. We demonstrate the effectiveness of TRUST in a case study on cybersecurity of the Industrial Internet of Things (IIoT), a prominent application domain that deals with numerical data, using three different cybersecurity data sets. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with local interpretable model-agnostic explanations (LIME), a popular XAI model, TRUST is shown to be superior in performance, speed, and method of explainability. Finally, we also show how TRUST is explained to the user.
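The pipeline the abstract describes (factor analysis → mutual-information ranking of latents → per-class multimodal Gaussians) can be sketched with scikit-learn stand-ins. This is a minimal illustration, not the paper's implementation: the synthetic data, the number of latent factors and mixture components, and the top-3 "representatives" cutoff are all assumptions chosen for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture

# Synthetic numerical data standing in for, e.g., IIoT network features.
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           n_classes=2, random_state=0)

# 1) Factor analysis: transform input features into latent variables.
fa = FactorAnalysis(n_components=8, random_state=0)
Z = fa.fit_transform(X)

# 2) Mutual information: rank latents and keep the most influential ones
#    (the "representatives" of the classes; top-3 is an arbitrary choice here).
mi = mutual_info_classif(Z, y, random_state=0)
top = np.argsort(mi)[::-1][:3]
R = Z[:, top]

# 3) Fit one multimodal Gaussian (here a 2-component GaussianMixture)
#    per class over the representatives.
mmg = {c: GaussianMixture(n_components=2, random_state=0).fit(R[y == c])
       for c in np.unique(y)}

# 4) For a new sample, compute the per-class log-likelihood; the likelihoods
#    themselves serve as the statistical explanation of the decision.
def trust_predict(x_new):
    z = fa.transform(x_new.reshape(1, -1))[:, top]
    scores = {c: g.score_samples(z)[0] for c, g in mmg.items()}
    return max(scores, key=scores.get), scores

pred, scores = trust_predict(X[0])
```

Because the explanation is just "which class's Gaussian mixture makes this sample most likely," it can be computed once per class rather than once per sample, which is where the claimed speed advantage over perturbation-based methods like LIME would come from.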
Pages: 2967-2978
Page count: 12
Related Papers
50 records in total
  • [21] Local Interpretable Model-Agnostic Explanations for Classification of Lymph Node Metastases
    de Sousa, Iam Palatnik
    Bernardes Rebuzzi Vellasco, Marley Maria
    da Silva, Eduardo Costa
    SENSORS, 2019, 19 (13)
  • [22] MANE: Model-Agnostic Non-linear Explanations for Deep Learning Model
    Tian, Yue
    Liu, Guanjun
    2020 IEEE WORLD CONGRESS ON SERVICES (SERVICES), 2020, : 33 - 36
  • [23] Stable local interpretable model-agnostic explanations based on a variational autoencoder
    Xiang, Xu
    Yu, Hong
    Wang, Ye
    Wang, Guoyin
    APPLIED INTELLIGENCE, 2023, 53 (23) : 28226 - 28240
  • [24] Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations
    Qian, Junyan
    Wen, Tong
    Ling, Ming
    Du, Xiaofu
    Ding, Hao
    JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH, 2025, 15 (03) : 257 - 277
  • [26] A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI
    Barbalau, Antonio
    Cosma, Adrian
    Ionescu, Radu Tudor
    Popescu, Marius
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 190 - 205
  • [27] Interpretable heartbeat classification using local model-agnostic explanations on ECGs
    Neves, Ines
    Folgado, Duarte
    Santos, Sara
    Barandas, Marilia
    Campagner, Andrea
    Ronzio, Luca
    Cabitza, Federico
    Gamboa, Hugo
    COMPUTERS IN BIOLOGY AND MEDICINE, 2021, 133
  • [28] Individualized help for at-risk students using model-agnostic and counterfactual explanations
Smith, Bevan I.
    Chimedza, Charles
    Buhrmann, Jacoba H.
    EDUCATION AND INFORMATION TECHNOLOGIES, 2022, 27 (02) : 1539 - 1558
  • [29] CountARFactuals - Generating Plausible Model-Agnostic Counterfactual Explanations with Adversarial Random Forests
    Dandl, Susanne
    Blesch, Kristin
    Freiesleben, Timo
    Koenig, Gunnar
    Kapar, Jan
    Bischl, Bernd
    Wright, Marvin N.
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, PT III, XAI 2024, 2024, 2155 : 85 - 107
  • [30] On the transferability of local model-agnostic explanations of machine learning models to unseen data
    Lopez Gonzalez, Alba Maria
    Garcia-Cuesta, Esteban
    IEEE CONFERENCE ON EVOLVING AND ADAPTIVE INTELLIGENT SYSTEMS 2024, IEEE EAIS 2024, 2024, : 243 - 252