E-XAI: Evaluating Black-Box Explainable AI Frameworks for Network Intrusion Detection

Cited by: 26
Authors
Arreche, Osvaldo [1 ]
Guntur, Tanish R. [2 ]
Roberts, Jack W. [3 ]
Abdallah, Mustafa [3 ]
Affiliations
[1] Indiana Univ Purdue Univ Indianapolis IUPUI, Purdue Sch Engn & Technol, Dept Elect & Comp Engn, Indianapolis, IN 46202 USA
[2] Indiana Univ Purdue Univ Indianapolis IUPUI, Dept Comp & Informat Sci, Indianapolis, IN 46202 USA
[3] Indiana Univ Purdue Univ Indianapolis IUPUI, Purdue Sch Engn & Technol, Dept Comp & Informat Technol, Indianapolis, IN 46202 USA
Keywords
XAI evaluation; intrusion detection systems; SHAP; explainable AI; network security; LIME; black-box AI; CICIDS-2017; RoEduNet-SIMARGL2021;
DOI
10.1109/ACCESS.2024.3365140
CLC number
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
The exponential growth of intrusions on networked systems motivates new research directions in developing artificial intelligence (AI) techniques for intrusion detection systems (IDS). In particular, the need to understand and explain these AI models to the security analysts who manage such IDS to safeguard their networks motivates the use of explainable AI (XAI) methods in real-world IDS. In this work, we propose an end-to-end framework to evaluate black-box XAI methods for network IDS. We evaluate both the global and local scopes of these black-box XAI methods for network intrusion detection. We analyze six evaluation metrics for two popular black-box XAI techniques, SHAP and LIME: descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness. These metrics cover the main concerns of both the network security and AI domains. We evaluate our XAI evaluation framework using three popular network intrusion datasets and seven AI methods with different characteristics. We release our source code so the network security community can use it as a baseline XAI framework for network IDS. Our framework shows the limitations and strengths of current black-box XAI methods when applied to network IDS.
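One of the abstract's evaluation metrics, sparsity, measures how concentrated an XAI method's feature attributions are: an explanation that assigns most of its weight to a few features is easier for an analyst to act on. The sketch below is an illustrative implementation, not the authors' code; it assumes a common formulation in which a feature is counted as negligible when its normalized absolute attribution falls below a threshold swept over [0, 1], and the sparsity score is the mean of that curve. The attribution values are hypothetical SHAP-style numbers for a handful of traffic features.

```python
# Illustrative sparsity-style metric for feature attributions.
# Assumption (not taken from the paper's code): attributions are normalized
# by the largest |value|, and a feature counts as negligible at threshold
# tau when its normalized magnitude is <= tau.

def sparsity_curve(attributions, taus):
    """Fraction of features whose normalized |attribution| is <= each tau."""
    mags = [abs(a) for a in attributions]
    peak = max(mags) or 1.0          # avoid division by zero for all-zero input
    norm = [m / peak for m in mags]
    return [sum(1 for v in norm if v <= t) / len(norm) for t in taus]

def sparsity_score(attributions, steps=10):
    """Mean of the sparsity curve over evenly spaced thresholds in [0, 1]."""
    taus = [i / steps for i in range(steps + 1)]
    curve = sparsity_curve(attributions, taus)
    return sum(curve) / len(curve)

# Hypothetical SHAP-style attributions for five flow features
# (e.g. duration, packet count, byte count, flag ratio, inter-arrival time).
attrs = [0.72, 0.05, 0.01, 0.30, 0.02]
print(round(sparsity_score(attrs), 3))  # closer to 1.0 = sparser explanation
```

A score near 1.0 means almost all attribution mass sits on one or two features; a score near 0.0 means the explanation spreads weight evenly, which the paper's sparsity analysis treats as harder to interpret.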
Pages: 23954-23988
Page count: 35