Explainable AI for Intrusion Detection Systems: A Model Development and Experts' Evaluation

Cited: 0
Authors
Durojaye, Henry [1 ]
Naiseh, Mohammad [1 ]
Affiliations
[1] Bournemouth Univ, Talbot, England
Source
INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, INTELLISYS 2024 | 2024, Vol. 1066
Keywords
Explainable AI; Trustworthy AI; Intrusion detection systems; Intelligence
DOI
10.1007/978-3-031-66428-1_18
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This study sought to develop a transparent machine learning model for network intrusion detection that domain experts would trust for security decision-making. Machine learning-based intrusion detection systems have shown promise but often lack interpretability, undermining user trust and deployment. A hybrid Random Forest/XGBoost classifier achieved over 99% accuracy and F1 score, outperforming results reported in prior literature. Post-hoc LIME explanations made the effects of individual features transparent. Nine domain experts in technical roles then evaluated the model's reliability, explainability, and trustworthiness through a standardised process. While over half found the model reliable, one third expressed uncertainty. Responses on performance explanations and trustworthiness assessments also varied, suggesting opportunities to strengthen reliability communications and reconcile diverse perspectives. To improve user confidence and support deployment, refinements targeting consistent explainability across audiences were proposed. Overall, high predictive performance validated the model's effectiveness, but the varied evaluation responses indicated a need for stronger reliability and trust explanations. With continued iterative evaluation and enhancement, this research framework holds promise for developing interpretable machine learning solutions trusted for complex security decision-making.
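The hybrid classifier described in the abstract can be sketched as a soft-voting ensemble of a Random Forest and a boosted-tree model. This is a minimal illustrative sketch, not the authors' actual pipeline: it uses a synthetic dataset in place of network-traffic data, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep dependencies light.

```python
# Hypothetical sketch of the hybrid ensemble described in the abstract.
# GradientBoostingClassifier is an assumed stand-in for XGBoost; the
# paper's exact features, hyperparameters, and dataset are not shown here.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labelled intrusion-detection dataset
# (rows ~ network flows, label ~ benign/attack).
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Soft voting averages the two models' predicted probabilities.
hybrid = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
hybrid.fit(X_train, y_train)
pred = hybrid.predict(X_test)
print(f"accuracy={accuracy_score(y_test, pred):.3f}  "
      f"F1={f1_score(y_test, pred):.3f}")
```

In the paper's setup, LIME would then be applied post hoc to the fitted ensemble to explain individual predictions, e.g. via `lime.lime_tabular.LimeTabularExplainer` from the `lime` package.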
Pages: 301-318
Page count: 18