TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security

Cited by: 53
Authors
Zolanvari, Maede [1 ]
Yang, Zebo [1 ]
Khan, Khaled [2 ]
Jain, Raj [1 ]
Meskin, Nader [2 ]
Affiliations
[1] Washington Univ, Dept Comp Sci & Engn, St Louis, MO 63130 USA
[2] Qatar Univ, Dept Comp Sci & Engn, Doha, Qatar
Keywords
Artificial intelligence; Industrial Internet of Things; Numerical models; Mathematical models; Computational modeling; Data models; Predictive models; Artificial intelligence (AI); cybersecurity; explainable AI (XAI); Industrial Internet of Things (IIoT); machine learning (ML); statistical modeling; trustworthy AI; MACHINE; INTERNET
DOI
10.1109/JIOT.2021.3122019
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Despite the significant growth of artificial intelligence (AI), its "black box" nature creates challenges in generating adequate trust. Thus, it is seldom utilized as a standalone unit in IoT high-risk applications, such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing appropriately fast and accurate XAI is still challenging, especially in numerical applications. Here, we propose a universal XAI model, named transparency relying upon statistical theory (TRUST), which is model-agnostic, high performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information (MI) to rank these variables, pick only the most influential ones on the AI's outputs, and call them "representatives" of the classes. Then, we use multimodal Gaussian (MMG) distributions to determine the likelihood of any new sample belonging to each class. We demonstrate the effectiveness of TRUST in a case study on cybersecurity of the Industrial Internet of Things (IIoT) using three different cybersecurity data sets, as IIoT is a prominent application domain that deals with numerical data. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with local interpretable model-agnostic explanations (LIME), a popular XAI model, TRUST is shown to be superior in performance, speed, and its method of explainability. Finally, we also show how TRUST's explanations are presented to the user.
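The abstract describes a three-stage pipeline: factor analysis to obtain latent variables, mutual-information ranking to select class "representatives," and per-class multimodal Gaussian likelihoods for new samples. A minimal sketch of that idea, using scikit-learn on synthetic data (all names, component counts, and the 3-representative cutoff are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for a numerical (e.g., IIoT network traffic) data set;
# y plays the role of the black-box AI's output labels.
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)

# 1) Transform input features into latent variables via factor analysis.
fa = FactorAnalysis(n_components=6, random_state=0)
Z = fa.fit_transform(X)

# 2) Rank latent variables by mutual information with the labels and keep
#    the most influential ones -- the "representatives" of the classes.
mi = mutual_info_classif(Z, y, random_state=0)
top = np.argsort(mi)[::-1][:3]          # keep the 3 highest-MI latents
R = Z[:, top]

# 3) Fit a multimodal Gaussian (mixture) model per class; a new sample is
#    explained by comparing its likelihood under each class's model.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(R[y == c])
          for c in np.unique(y)}

def explain(sample):
    """Per-class log-likelihoods for one raw feature vector."""
    r = fa.transform(sample.reshape(1, -1))[:, top]
    return {c: float(m.score(r)) for c, m in models.items()}

scores = explain(X[0])
predicted_class = max(scores, key=scores.get)
```

Because the surrogate is a closed-form statistical model rather than a locally fitted one (as in LIME), explaining a new sample costs only a few Gaussian density evaluations, which is consistent with the speed advantage the abstract claims.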
Pages: 2967-2978
Page count: 12
Related Papers
50 total
  • [1] Model-agnostic explanations for survival prediction models
    Suresh, Krithika
    Gorg, Carsten
    Ghosh, Debashis
    STATISTICS IN MEDICINE, 2024, 43 (11) : 2161 - 2182
  • [2] Model-Agnostic Counterfactual Explanations in Credit Scoring
    Dastile, Xolani
    Celik, Turgay
    Vandierendonck, Hans
    IEEE ACCESS, 2022, 10 : 69543 - 69554
  • [3] Model-Agnostic Counterfactual Explanations for Consequential Decisions
    Karimi, Amir-Hossein
    Barthe, Gilles
    Balle, Borja
    Valera, Isabel
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 895 - 904
  • [4] Semantic Reasoning from Model-Agnostic Explanations
    Perdih, Timen Stepisnik
    Lavrac, Nada
    Skrlj, Blaz
    2021 IEEE 19TH WORLD SYMPOSIUM ON APPLIED MACHINE INTELLIGENCE AND INFORMATICS (SAMI 2021), 2021, : 105 - 110
  • [5] Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support
    Meske, Christian
    Bunde, Enrico
    ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2020, 2020, 12217 : 54 - 69
  • [6] A comparative study of methods for estimating model-agnostic Shapley value explanations
    Olsen, Lars Henry Berge
    Glad, Ingrid Kristine
    Jullum, Martin
    Aas, Kjersti
    DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 (04) : 1782 - 1829
  • [7] Model-agnostic and diverse explanations for streaming rumour graphs
    Nguyen, Thanh Tam
    Phan, Thanh Cong
    Nguyen, Minh Hieu
    Weidlich, Matthias
    Yin, Hongzhi
    Jo, Jun
    Nguyen, Quoc Viet Hung
    KNOWLEDGE-BASED SYSTEMS, 2022, 253
  • [8] Anchors: High-Precision Model-Agnostic Explanations
    Ribeiro, Marco Tulio
    Singh, Sameer
    Guestrin, Carlos
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 1527 - 1535
  • [9] Model-Agnostic Explanations using Minimal Forcing Subsets
    Han, Xing
    Ghosh, Joydeep
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [10] Learning Model-Agnostic Counterfactual Explanations for Tabular Data
    Pawelczyk, Martin
    Broelemann, Klaus
    Kasneci, Gjergji
    WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020, : 3126 - 3132