Improving IoT Security With Explainable AI: Quantitative Evaluation of Explainability for IoT Botnet Detection

Cited by: 9
Authors
Kalakoti, Rajesh [1 ]
Bahsi, Hayretdin [1 ,2 ]
Nomm, Sven [1 ]
Affiliations
[1] Tallinn Univ Technol, Dept Software Sci, EE-12616 Tallinn, Estonia
[2] No Arizona Univ, Sch Informat Comp & Cyber Syst, Flagstaff, AZ 86011 USA
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 10
Keywords
Botnet; Internet of Things; Task analysis; Explainable AI; Feature extraction; Complexity theory; Artificial neural networks; complexity; consistency; explainable artificial intelligence (XAI); faithfulness; feature importance; Internet of Things (IoT); local interpretable model-agnostic explanations (LIME); posthoc XAI; robustness; Shapley additive explanation (SHAP); ARTIFICIAL-INTELLIGENCE; SELECTION; INTERNET; THINGS;
DOI
10.1109/JIOT.2024.3360626
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Detecting botnets is an essential task for ensuring the security of Internet of Things (IoT) systems. Machine learning (ML)-based approaches have been widely used for this purpose, but the lack of interpretability and transparency of the models often limits their effectiveness. In this paper, we aim to improve the transparency and interpretability of high-performance ML models for IoT botnet detection by selecting higher-quality explanations using explainable artificial intelligence (XAI) techniques. We use three data sets to induce binary and multiclass classification models for IoT botnet detection, with sequential backward selection (SBS) employed as the feature selection technique. We then apply two post hoc XAI techniques, local interpretable model-agnostic explanations (LIME) and Shapley additive explanation (SHAP), to explain the behavior of the models. To evaluate the quality of the explanations generated by these XAI methods, we employ faithfulness, monotonicity, complexity, and sensitivity metrics. The ML models employed in this work achieve very high detection rates with a limited number of features. Our findings demonstrate the effectiveness of XAI methods in improving the interpretability and transparency of ML-based IoT botnet detection models. Specifically, explanations generated by applying LIME and SHAP to the extreme gradient boosting model yield high faithfulness, high consistency, low complexity, and low sensitivity. Furthermore, SHAP outperforms LIME, achieving better results across these metrics.
Pages: 18237-18254
Number of pages: 18
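The pipeline described in the abstract (train a classifier, explain its predictions post hoc with SHAP or LIME, then score the explanation quality) can be illustrated with a short sketch. The code below is not the authors' implementation: it is a minimal, hypothetical example that assumes a synthetic data set in place of the IoT botnet traffic data, XGBoost as the classifier, SHAP's TreeExplainer for the explanations, and a simplified faithfulness correlation (attribution vs. change in predicted probability under mean-value perturbation) standing in for the metrics reported in the paper.

```python
# Minimal sketch (not the paper's code): XGBoost detector + SHAP explanation
# + a simplified faithfulness score for one explained test sample.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a preprocessed IoT botnet data set (features, binary labels).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)

# Post hoc explanation with SHAP; TreeExplainer is exact for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

def faithfulness(model, x, attributions, baseline):
    """Correlation between each feature's attribution and the drop in the
    predicted botnet probability when that feature is set to a baseline value."""
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    drops = []
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] = baseline[j]
        drops.append(p_orig - model.predict_proba(x_pert.reshape(1, -1))[0, 1])
    return np.corrcoef(attributions, drops)[0, 1]

baseline = X_train.mean(axis=0)  # mean-value baseline (an assumption, not from the paper)
score = faithfulness(model, X_test[0], shap_values[0], baseline)
print(f"Faithfulness of SHAP explanation for test sample 0: {score:.3f}")
```

A higher correlation indicates that features assigned larger attributions also cause larger changes in the model output when perturbed, which is the intuition behind the faithfulness criterion; the complexity, consistency, and sensitivity metrics named in the abstract would be computed analogously over the same attribution vectors.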