Improving IoT Security With Explainable AI: Quantitative Evaluation of Explainability for IoT Botnet Detection

Cited by: 9
Authors
Kalakoti, Rajesh [1 ]
Bahsi, Hayretdin [1 ,2 ]
Nomm, Sven [1 ]
Affiliations
[1] Tallinn Univ Technol, Dept Software Sci, EE-12616 Tallinn, Estonia
[2] No Arizona Univ, Sch Informat Comp & Cyber Syst, Flagstaff, AZ 86011 USA
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 10
Keywords
Botnet; Internet of Things; Task analysis; Explainable AI; Feature extraction; Complexity theory; Artificial neural networks; complexity; consistency; explainable artificial intelligence (XAI); faithfulness; feature importance; Internet of Things (IoT); local interpretable model-agnostic explanations (LIME); posthoc XAI; robustness; Shapley additive explanation (SHAP); ARTIFICIAL-INTELLIGENCE; SELECTION; INTERNET; THINGS;
DOI
10.1109/JIOT.2024.3360626
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Detecting botnets is an essential task for ensuring the security of Internet of Things (IoT) systems. Machine learning (ML)-based approaches have been widely used for this purpose, but the lack of interpretability and transparency of the models often limits their effectiveness. In this paper, we aim to improve the transparency and interpretability of high-performance ML models for IoT botnet detection by selecting higher-quality explanations generated with explainable artificial intelligence (XAI) techniques. We use three data sets to induce binary and multiclass classification models for IoT botnet detection, with sequential backward selection (SBS) employed as the feature selection technique. We then apply two post hoc XAI techniques, local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), to explain the behavior of the models. To evaluate the quality of the explanations generated by these XAI methods, we employ faithfulness, monotonicity, complexity, and sensitivity metrics. The ML models employed in this work achieve very high detection rates with a limited number of features. Our findings demonstrate the effectiveness of XAI methods in improving the interpretability and transparency of ML-based IoT botnet detection models. Specifically, explanations generated by applying LIME and SHAP to the extreme gradient boosting model yield high faithfulness, high consistency, low complexity, and low sensitivity. Furthermore, SHAP outperforms LIME, achieving better results on these metrics.
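The following is a minimal sketch, not the paper's exact pipeline, of the kind of workflow the abstract describes: train an extreme gradient boosting classifier, obtain a post hoc SHAP explanation for one prediction, and score that explanation with a simplified faithfulness correlation (the correlation between each feature's attribution and the drop in the predicted probability when that feature is replaced by a baseline value). The synthetic data, the specific hyperparameters, and the `faithfulness_correlation` helper are illustrative assumptions standing in for the paper's IoT botnet data sets and metric definitions.

```python
# Hedged sketch: XGBoost + SHAP + a simplified faithfulness metric.
# Synthetic data stands in for the (SBS-reduced) IoT botnet feature vectors.
import numpy as np
import shap
import xgboost as xgb
from scipy.stats import pearsonr
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary "botnet vs. benign" data with a small feature set.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Extreme gradient boosting model (hyperparameters are illustrative).
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Post hoc explanation: SHAP attributions for the test instances.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)   # shape: (n_samples, n_features)
instance_idx = 0
attributions = shap_values[instance_idx]

def faithfulness_correlation(model, x, attributions, baseline):
    """Simplified faithfulness score: correlate each feature's attribution
    with the change in the predicted positive-class probability when that
    feature is replaced by a baseline value (here, the training mean)."""
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    deltas = []
    for j in range(len(x)):
        x_pert = x.copy()
        x_pert[j] = baseline[j]
        p_pert = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
        deltas.append(p_orig - p_pert)
    corr, _ = pearsonr(attributions, np.array(deltas))
    return corr

baseline = X_train.mean(axis=0)
score = faithfulness_correlation(model, X_test[instance_idx],
                                 attributions, baseline)
print(f"Faithfulness correlation for instance {instance_idx}: {score:.3f}")
```

A higher correlation indicates that features the explanation ranks as important are also the ones whose removal most changes the model's output, which is the intuition behind the faithfulness metric; the complexity and sensitivity metrics mentioned in the abstract would be computed over the same attribution vectors.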
Pages: 18237-18254
Number of pages: 18
Related Papers
50 records in total
  • [1] Improving Transparency and Explainability of Deep Learning based IoT Botnet Detection using Explainable Artificial Intelligence (XAI)
    Kalakoti, Rajesh
    Nomm, Sven
    Bahsi, Hayretdin
    22ND IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA 2023, 2023, : 595 - 601
  • [2] Explainable Federated Learning for Botnet Detection in IoT Networks
    Kalakoti, Rajesh
    Bahsi, Hayretdin
    Nomm, Sven
    2024 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE, CSR, 2024, : 22 - 29
  • [3] A survey on IoT application layer protocols, security challenges, and the role of explainable AI in IoT (XAIoT)
    Quincozes, Vagner E.
    Quincozes, Silvio E.
    Kazienko, Juliano F.
    Gama, Simone
    Cheikhrouhou, Omar
    Koubaa, Anis
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2024, 23 (03) : 1975 - 2002
  • [4] XAI-IoT: An Explainable AI Framework for Enhancing Anomaly Detection in IoT Systems
    Namrita Gummadi, Anna
    Napier, Jerry C.
    Abdallah, Mustafa
    IEEE ACCESS, 2024, 12 : 71024 - 71054
  • [5] Enhancing IoT Botnet Attack Detection in SOCs with an Explainable Active Learning Framework
    Kalakoti, Rajesh
    Nomm, Sven
    Bahsi, Hayretdin
    2024 IEEE 5TH ANNUAL WORLD AI IOT CONGRESS, AIIOT 2024, 2024, : 0265 - 0272
  • [6] Leveraging Explainable AI for Actionable Insights in IoT Intrusion Detection
    Gyawali, Sohan
    Huang, Jiaqi
    Jiang, Yili
    2024 19TH ANNUAL SYSTEM OF SYSTEMS ENGINEERING CONFERENCE, SOSE 2024, 2024, : 92 - 97
  • [7] The Mirai Botnet and the Importance of IoT Device Security
    Eustis, Alexander G.
    16TH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY-NEW GENERATIONS (ITNG 2019), 2019, 800 : 85 - 89
  • [8] Toward Improving the Security of IoT and CPS Devices: An AI Approach
    Albasir, Abdurhman
    Naik, Kshirasagar
    Manzano, Ricardo
    DIGITAL THREATS: RESEARCH AND PRACTICE, 2023, 4 (02):
  • [9] A systematic evaluation of white-box explainable AI methods for anomaly detection in IoT systems
    Gummadi, Anna N.
    Arreche, Osvaldo
    Abdallah, Mustafa
    INTERNET OF THINGS, 2025, 30
  • [10] BotStop : Packet-based efficient and explainable IoT botnet detection using machine learning
    Alani, Mohammed M.
    COMPUTER COMMUNICATIONS, 2022, 193 : 53 - 62