Machine Learning Explainability for Intrusion Detection in the Industrial Internet of Things

Cited by: 5
Authors
Ahakonye L.A.C. [1 ]
Nwakanma C.I. [1 ]
Lee J.M. [2 ]
Kim D.-S. [2 ]
Affiliations
[1] Kumoh National Institute of Technology, ICT-Convergence Research Center
[2] Kumoh National Institute of Technology, Department of IT Convergence Engineering
Source
IEEE Internet of Things Magazine | 2024 / Vol. 7 / No. 3
Keywords
Cybersecurity; Decision making; Intrusion detection; LIME; Machine learning
DOI
10.1109/IOTM.001.2300171
Abstract
Intrusions and attacks have consistently challenged the Industrial Internet of Things (IIoT). Although artificial intelligence (AI) is developing rapidly for attack detection and mitigation, building trust in it is difficult because of its black-box nature: its unexplained outcomes inhibit informed, adequate decision-making by experts and stakeholders. Explainable AI (XAI) has emerged to address this problem. However, the comprehensibility of XAI interpretations remains an issue owing to their complexity and reliance on statistical theory. This study integrates the model-agnostic, post-hoc LIME and SHAP explainability approaches into intrusion detection systems built with representative AI models to explain the models' decisions and provide deeper insight into their interpretability. Decision and confidence impact ratios are used to assess the significance of features and model dependencies, enhancing cybersecurity experts' trust and supporting informed decisions. The experimental findings highlight the importance of these explainability techniques for understanding and mitigating IIoT intrusions with recourse to significant data features and model decisions. © 2024 IEEE.
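The local, model-agnostic explanation style the abstract attributes to LIME can be illustrated with a minimal sketch (an assumption of this record, not code from the paper): a tree-ensemble classifier stands in for the intrusion detector, and a locally weighted linear surrogate fitted on perturbed copies of one traffic sample yields per-feature importances, in the spirit of Ribeiro et al.'s LIME.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for IIoT traffic data: 4 hypothetical flow features.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=1000, kernel_width=1.0):
    """Fit a locally weighted linear surrogate around instance x and
    return its coefficients as local feature importances (LIME-style)."""
    # 1. Perturb the instance to probe the black box in its neighborhood.
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black-box model for the "attack" class probability.
    probs = model.predict_proba(perturbed)[:, 1]
    # 3. Weight samples by proximity to x with an exponential kernel.
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable weighted linear model as the local surrogate.
    surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)
    return surrogate.coef_

importances = lime_style_explanation(model, X[0])
top = int(np.argmax(np.abs(importances)))
print(f"local importances: {importances.round(3)}, top feature index: {top}")
```

A SHAP explanation of the same instance would instead attribute the prediction via Shapley values; for tree-based detectors such as the one above, the `shap` library's tree explainer is the customary choice.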
Pages: 68-74
Page count: 6
Related Papers
15 in total
[1]  
El Houda Z.A., Brik B., Senouci S.-M., A Novel IoT-Based Explainable Deep Learning Framework for Intrusion Detection Systems, IEEE Internet of Things Mag., 5, 2, pp. 20-23, (2022)
[2]  
Ahakonye L.A.C., et al., Agnostic CH-DT Technique for SCADA Network High-Dimensional Data-Aware Intrusion Detection System, IEEE Internet of Things J., 10, 12, pp. 10344-10356, (2023)
[3]  
Zolanvari M., et al., Trust XAI: Model-Agnostic Explanations for AI with A Case Study on IIoT Security, IEEE Internet of Things J., (2021)
[4]  
Neupane S., et al., Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities, IEEE Access, 10, pp. 112392-112415, (2022)
[5]  
Nwakanma C.I., et al., Explainable Artificial Intelligence (XAI) for Intrusion Detection and Mitigation in Intelligent Connected Vehicles: A Review, Applied Sciences, 13, 3, (2023)
[6]  
Al-Hawawreh M., Moustafa N., Explainable Deep Learning for Attack Intelligence and Combating Cyber-Physical Attacks, Ad Hoc Networks, 153, (2024)
[7]  
Capuano N., et al., Explainable Artificial Intelligence in Cyber-Security: A Survey, IEEE Access, 10, pp. 93575-93600, (2022)
[8]  
Ribeiro M.T., Singh S., Guestrin C., Why Should I Trust You? Explaining the Predictions of Any Classifier, Proc. 22nd ACM SIGKDD Int'l. Conf. Knowledge Discovery and Data Mining, pp. 1135-1144, (2016)
[9]  
Barnard P., Marchetti N., DaSilva L.A., Robust Network Intrusion Detection Through Explainable Artificial Intelligence (XAI), IEEE Networking Letters, 4, 3, pp. 167-171, (2022)
[10]  
Zou L., et al., Ensemble Image Explainable AI (XAI) Algorithm for Severe Community-Acquired Pneumonia and COVID-19 Respiratory Infections, IEEE Trans. Artificial Intelligence, 4, 2, pp. 242-254, (2022)