TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security

Cited by: 53
Authors
Zolanvari, Maede [1 ]
Yang, Zebo [1 ]
Khan, Khaled [2 ]
Jain, Raj [1 ]
Meskin, Nader [2 ]
Affiliations
[1] Washington Univ, Dept Comp Sci & Engn, St Louis, MO 63130 USA
[2] Qatar Univ, Dept Comp Sci & Engn, Doha, Qatar
Keywords
Artificial intelligence (AI); Industrial Internet of Things (IIoT); numerical models; mathematical models; computational modeling; data models; predictive models; cybersecurity; explainable AI (XAI); machine learning (ML); statistical modeling; trustworthy AI
DOI
10.1109/JIOT.2021.3122019
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Despite artificial intelligence (AI)'s significant growth, its "black box" nature makes it difficult to establish adequate trust. Thus, it is seldom utilized as a standalone unit in high-risk IoT applications, such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing appropriately fast and accurate XAI remains challenging, especially in numerical applications. Here, we propose a universal XAI model, named transparency relying upon statistical theory (TRUST), which is model-agnostic, high performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information (MI) to rank these variables, pick only the ones most influential on the AI's outputs, and call them "representatives" of the classes. Then, we use multimodal Gaussian (MMG) distributions to determine the likelihood of any new sample belonging to each class. We demonstrate the effectiveness of TRUST in a case study on the cybersecurity of the Industrial Internet of Things (IIoT), a prominent application domain that deals with numerical data, using three different cybersecurity data sets. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with local interpretable model-agnostic explanations (LIME), a popular XAI model, TRUST is shown to be superior in terms of performance, speed, and its method of explainability. Finally, we also show how TRUST is explained to the user.
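The abstract describes a three-step pipeline: factor analysis to obtain latent variables, MI-based selection of the most influential ones ("representatives"), and a per-class multimodal Gaussian to score new samples. Below is a minimal, illustrative sketch of that pipeline using scikit-learn stand-ins; the function names (fit_trust, explain) and all hyperparameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture

def fit_trust(X, y_ai, n_factors=10, n_reps=4, n_modes=3):
    """Model the statistical behavior of the AI's output labels y_ai on inputs X.
    Hyperparameter defaults here are illustrative, not from the paper."""
    # 1) Factor analysis: transform input features into latent variables.
    fa = FactorAnalysis(n_components=n_factors).fit(X)
    Z = fa.transform(X)
    # 2) Mutual information: rank latent variables by influence on the AI's
    #    outputs and keep only the top ones (the class "representatives").
    mi = mutual_info_classif(Z, y_ai)
    reps = np.argsort(mi)[::-1][:n_reps]
    # 3) Multimodal Gaussian: fit one mixture per class over the representatives.
    mmgs = {c: GaussianMixture(n_components=n_modes, random_state=0)
               .fit(Z[y_ai == c][:, reps])
            for c in np.unique(y_ai)}
    return fa, reps, mmgs

def explain(x_new, fa, reps, mmgs):
    """Log-likelihood of a new sample under each class's mixture; the most
    likely class is the statistical explanation of the AI's prediction."""
    z = fa.transform(np.asarray(x_new).reshape(1, -1))[:, reps]
    return {c: float(mmg.score_samples(z)[0]) for c, mmg in mmgs.items()}
```

This is model-agnostic in the sense the abstract uses: only the underlying AI's output labels y_ai are consumed, never its internals, so the same statistical surrogate applies to any classifier.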
Pages: 2967-2978 (12 pages)