A Deeper Analysis of Adversarial Examples in Intrusion Detection

Cited by: 7
Authors
Merzouk, Mohamed Amine [1 ,2 ]
Cuppens, Frederic [3 ]
Boulahia-Cuppens, Nora [3 ]
Yaich, Reda [4 ]
Affiliations
[1] Ecole Natl Superieure Informat, Algiers, Algeria
[2] IMT Atlantique, Rennes, France
[3] Polytechn Montreal, Montreal, PQ, Canada
[4] IRT SystemX, Palaiseau, France
Source
RISKS AND SECURITY OF INTERNET AND SYSTEMS (CRISIS 2020) | 2021 / Vol. 12528
Funding
EU Horizon 2020;
Keywords
Adversarial machine learning; Adversarial examples; Intrusion detection; Evasion attacks;
DOI
10.1007/978-3-030-68887-5_4
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
During the last decade, machine learning algorithms have been massively integrated into the defense arsenal available to security professionals, especially for intrusion detection. Despite the progress made in this area, however, machine learning models have been found to be vulnerable to slightly modified data samples called adversarial examples. A small, well-computed perturbation may thus allow adversaries to evade intrusion detection systems. Numerous works have already successfully applied adversarial examples to network intrusion detection datasets, yet little attention has been given so far to the practicality of these examples in end-to-end network attacks. In this paper, we study the applicability of network attacks based on adversarial examples in real networks. We closely analyze adversarial examples generated with state-of-the-art algorithms and evaluate their consistency against several criteria. Our results show a large proportion of invalid examples that are unlikely to lead to real attacks.
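As an illustration of the kind of algorithm the abstract refers to, the sketch below shows the fast gradient sign method (FGSM), one widely used adversarial-example generator. This is a minimal Python/PyTorch sketch under assumed placeholders: the classifier model, feature tensor x, labels y, and step size epsilon are hypothetical, not the paper's actual experimental setup.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.05):
    # Hypothetical setup: model is a differentiable classifier over
    # network-flow features x (float tensor); y holds the true labels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()  # gradient of the loss w.r.t. the input features
    # One signed-gradient step of size epsilon (an L-infinity perturbation).
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

Applied blindly to network-flow features, such a step can break domain constraints, for example pushing a binary flag off 0/1 or yielding a non-integer packet count, which illustrates why the paper finds a large proportion of generated examples to be invalid.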
Pages: 67-84
Number of pages: 18