Rallying Adversarial Techniques against Deep Learning for Network Security

Cited by: 27
Authors
Clements, Joseph [1 ]
Yang, Yuzhe [1 ]
Sharma, Ankur A. [1 ]
Hu, Hongxin [2 ]
Lao, Yingjie [1 ]
Affiliations
[1] Clemson Univ, Clemson, SC 29634 USA
[2] Univ Buffalo, Buffalo, NY USA
Source
2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021) | 2021
Funding
U.S. National Science Foundation;
Keywords
Network Intrusion Detection System; Adversarial Machine Learning; Adversarial Examples; Deep Learning; MALWARE DETECTION; MACHINE;
DOI
10.1109/SSCI50451.2021.9660011
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advances in artificial intelligence and the increasing need for robust defensive measures in network security have led to the adoption of deep learning approaches for network intrusion detection systems (NIDS). These methods have achieved superior performance against conventional network attacks, enabling unique and dynamic security systems in real-world applications. Unfortunately, adversarial machine learning has recently shown that deep learning models are inherently vulnerable to adversarial modifications of their input data. In this work, we explore the potential of adversarial entities to exploit these vulnerabilities and compromise deep learning-based NIDS. Specifically, we show that by modifying, on average, as few as 1.38 of an observed packet's input features, an adversary can generate malicious inputs that effectively fool a target deep learning-based NIDS. It is therefore crucial to consider performance from both the conventional network security perspective and the adversarial machine learning perspective when designing such systems.
Pages: 8
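The attack summarized in the abstract perturbs only a handful of a packet's input features to flip a deep learning NIDS's decision. The sketch below illustrates that general idea with a sparse, gradient-guided perturbation against a small surrogate classifier; the model architecture, feature dimension (FEATURE_DIM), feature budget (K_FEATURES), and step size (EPSILON) are illustrative assumptions, not details taken from the paper, and this is not the authors' attack.

# Minimal sketch (illustrative only): perturb the K most gradient-salient
# features of a flow/packet feature vector so a simple surrogate NIDS
# classifier is nudged away from the "malicious" decision.
import torch
import torch.nn as nn

FEATURE_DIM = 41   # assumed feature-vector size (e.g., NSL-KDD-style features)
K_FEATURES = 2     # number of features the adversary is allowed to modify
EPSILON = 0.2      # assumed per-feature perturbation budget

# Surrogate detector: feature vector -> logits over {benign, malicious}.
# Here it is randomly initialized; in practice it would be a trained NIDS model.
model = nn.Sequential(
    nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
model.eval()

def sparse_adversarial_example(x: torch.Tensor) -> torch.Tensor:
    """Return a copy of x with at most K_FEATURES entries shifted by EPSILON
    in the direction that raises the loss of the 'malicious' label (class 1),
    i.e., nudging the sample toward a 'benign' decision."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv.unsqueeze(0))
    loss = nn.functional.cross_entropy(logits, torch.tensor([1]))
    loss.backward()

    grad = x_adv.grad.detach()
    # Select the features whose gradients most strongly influence the loss.
    top_k = torch.topk(grad.abs(), K_FEATURES).indices
    x_new = x.clone()
    x_new[top_k] += EPSILON * grad[top_k].sign()
    return x_new.clamp(0.0, 1.0)  # keep features in a normalized [0, 1] range

if __name__ == "__main__":
    torch.manual_seed(0)
    sample = torch.rand(FEATURE_DIM)  # stand-in "malicious" feature vector
    adv = sparse_adversarial_example(sample)
    print("features modified:", int((adv != sample).sum()))
    print("surrogate prediction before:", model(sample.unsqueeze(0)).argmax().item())
    print("surrogate prediction after: ", model(adv.unsqueeze(0)).argmax().item())

In practice, the perturbed values would also have to remain consistent with valid network traffic (protocol, value ranges, inter-feature dependencies), a constraint this sketch does not enforce.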