TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security

Cited: 53
Authors
Zolanvari, Maede [1 ]
Yang, Zebo [1 ]
Khan, Khaled [2 ]
Jain, Raj [1 ]
Meskin, Nader [2 ]
Affiliations
[1] Washington Univ, Dept Comp Sci & Engn, St Louis, MO 63130 USA
[2] Qatar Univ, Dept Comp Sci & Engn, Doha, Qatar
Keywords
Artificial intelligence; Industrial Internet of Things; Numerical models; Mathematical models; Computational modeling; Data models; Predictive models; Artificial intelligence (AI); cybersecurity; explainable AI (XAI); Industrial Internet of Things (IIoT); machine learning (ML); statistical modeling; trustworthy AI
DOI
10.1109/JIOT.2021.3122019
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Despite artificial intelligence (AI)'s significant growth, its "black box" nature creates challenges in generating adequate trust. Thus, it is seldom utilized as a standalone unit in IoT high-risk applications, such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing appropriately fast and accurate XAI is still challenging, especially in numerical applications. Here, we propose a universal XAI model, named transparency relying upon statistical theory (TRUST), which is model-agnostic, high performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information (MI) to rank these variables, pick only the ones most influential on the AI's outputs, and call them "representatives" of the classes. Then, we use multimodal Gaussian (MMG) distributions to determine the likelihood of any new sample belonging to each class. We demonstrate the effectiveness of TRUST in a case study on cybersecurity of the Industrial Internet of Things (IIoT) using three different cybersecurity data sets, as IIoT is a prominent application that deals with numerical data. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with local interpretable model-agnostic explanations (LIME), a popular XAI model, TRUST is shown to be superior in terms of performance, speed, and method of explainability. Finally, we show how TRUST is explained to the user.
Pages: 2967-2978
Page count: 12
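
The abstract describes a concrete three-step pipeline: factor analysis to obtain latent variables, MI ranking to pick class "representatives," and per-class multimodal Gaussians to score new samples. Below is a minimal sketch of that flow using off-the-shelf scikit-learn components; it is not the authors' implementation, and the function names and hyperparameters (n_factors, n_reps, n_modes) are illustrative assumptions.

```python
# Illustrative sketch of the TRUST pipeline from the abstract, using
# scikit-learn stand-ins; not the authors' code. fit_trust/explain and the
# hyperparameters (n_factors, n_reps, n_modes) are hypothetical choices.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture

def fit_trust(X, y_ai, n_factors=10, n_reps=5, n_modes=3):
    """Model the statistical behavior of an AI's outputs y_ai on inputs X."""
    y_ai = np.asarray(y_ai)
    # Step 1: factor analysis transforms input features into latent variables.
    fa = FactorAnalysis(n_components=n_factors).fit(X)
    Z = fa.transform(X)
    # Step 2: rank latent variables by mutual information with the AI's
    # outputs and keep only the most influential ones ("representatives").
    mi = mutual_info_classif(Z, y_ai)
    reps = np.argsort(mi)[::-1][:n_reps]
    # Step 3: per class, fit a multimodal Gaussian (here a Gaussian mixture)
    # over the representatives; assumes each class has enough samples.
    mmg = {c: GaussianMixture(n_components=n_modes).fit(Z[y_ai == c][:, reps])
           for c in np.unique(y_ai)}
    return fa, reps, mmg

def explain(x_new, fa, reps, mmg):
    """Log-likelihood of a new sample under each class's MMG model."""
    z = fa.transform(np.asarray(x_new).reshape(1, -1))[:, reps]
    return {c: g.score_samples(z)[0] for c, g in mmg.items()}
```

On this reading, a new sample is labeled with the class whose MMG yields the highest likelihood, and the values of its representatives provide the user-facing explanation, consistent with the abstract's description.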