XMID-MQTT: explaining machine learning-based intrusion detection system for MQTT protocol in IoT environment

Times Cited: 0
Authors
Zeghida, Hayette [1 ]
Boulaiche, Mehdi [1 ]
Chikh, Ramdane [1 ]
Patel, Ahmed [2 ]
Barros, Ana Luiza Bessa [2 ]
Bamhdi, Alwi M. [3 ]
Affiliations
[1] Univ 20 aout 1955, Dept Comp Sci, LICUS Lab, Skikda, Algeria
[2] Univ Estadual Ceara UECE, Comp Sci Program, Fortaleza, Brazil
[3] Umm Al Qura Univ, Coll Comp, Mecca, Saudi Arabia
Keywords
Deep learning; Internet of things; IDS; Machine learning; MQTT; XAI
DOI
10.1007/s10207-025-01036-w
CLC Classification
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
The growing dependence on the internet of things (IoT) across diverse applications underscores the need for robust security measures to safeguard these systems from numerous cyber threats. MQTT, a lightweight messaging protocol designed for IoT, is particularly vulnerable to cyberattacks due to its widespread use and inherent security weaknesses. Intrusion detection systems (IDS) are pivotal in identifying and mitigating these threats. In our study, we used five classifiers to categorize network traffic in the MQTT-IoT-IDS2020 dataset as normal or as one of several attack types (brute force, Scan A, Scan sU, Sparta): random forest (RF), SVMs with linear and RBF kernels, a CNN, and a hybrid CNN-LSTM. Our findings highlight the efficacy of ML-based models in detecting MQTT intrusions, with the RF classifier demonstrating superior performance at 99.9% accuracy. To provide clear, interpretable insights into AI model decisions and to understand which features most influence them, this study introduces an innovative approach named XMID-MQTT (explaining machine learning-based intrusion detection system for the MQTT protocol in an IoT environment). XMID-MQTT offers a comprehensive methodology for developing, training, and evaluating ML and DL models for multi-class classification of cyberattacks in MQTT protocol traffic. Explainable artificial intelligence (XAI) techniques, including SHAP and LIME, are used to interpret the classifiers' results. SHAP and LIME provide detailed explanations of model predictions, enhancing the AI models' transparency and interpretability: they expose the model's decision-making process and identify which features most strongly influence predictions. This fosters trust among users and stakeholders by making the AI's operations more comprehensible and reliable.
Consequently, it facilitates broader adoption and integration of AI-driven IDS in IoT environments and provides a roadmap for further investigation in this evolving field. To our knowledge, this is the first study to apply XAI techniques, specifically SHAP and LIME, to the MQTT dataset, marking a key advancement in the field.
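The explanation step described above can be sketched with permutation importance, a simple model-agnostic technique in the same spirit as SHAP and LIME: measure how much a classifier's accuracy drops when one feature column is shuffled. The feature names, synthetic data, and linear "classifier" below are illustrative assumptions only; the paper itself applies SHAP and LIME to classifiers trained on the MQTT-IoT-IDS2020 dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MQTT traffic features (hypothetical names):
# f0 = packet rate, f1 = payload length, f2 = irrelevant noise feature.
n = 1000
X = rng.normal(size=(n, 3))
# The label depends strongly on f0, weakly on f1, and not at all on f2.
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# A minimal linear classifier: least-squares weights, threshold at 0.5.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(M):
    return (M @ w > 0.5).astype(int)

def permutation_importance(X, y, predict, rng):
    """Accuracy drop when each feature column is shuffled in turn:
    a model-agnostic explanation of which features drive predictions."""
    base = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - (predict(Xp) == y).mean())
    return np.array(drops)

imp = permutation_importance(X, y, predict, rng)
# The dominant feature (f0) should show the largest accuracy drop.
print(imp.argmax())
```

Unlike SHAP (which attributes each prediction to features via Shapley values) or LIME (which fits a local surrogate model around one instance), this global score only ranks features, but it conveys the same core idea: explaining which inputs the IDS relies on.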
Pages: 22