Adversarial Attacks on Intrusion Detection Systems Using the LSTM Classifier

Cited by: 3
Authors
Kulikov, D. A. [1 ]
Platonov, V. V. [1 ]
Affiliations
[1] Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia
Keywords
adversarial attack; intrusion detection system; neural network; LSTM;
DOI
10.3103/S0146411621080174
CLC number
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
In this paper, adversarial attacks on machine learning models and their classification are considered. Methods for assessing the resistance of a long short-term memory (LSTM) classifier to adversarial attacks are proposed. The Jacobian-based saliency map attack (JSMA) and the fast gradient sign method (FGSM), chosen because of the transferability of their adversarial examples between machine learning models, are discussed in detail. A "poisoning" attack on the LSTM classifier is proposed. Methods of protection against the considered adversarial attacks are formulated.
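The abstract refers to the fast gradient sign method (FGSM), which perturbs an input along the sign of the loss gradient with respect to that input. The following NumPy sketch is illustrative only, not the paper's implementation: it applies FGSM to a toy logistic-regression model, where the weight vector `w`, sample `x`, label `y`, and step size `eps` are hypothetical values chosen for demonstration.

```python
import numpy as np

def logistic_loss(w, x, y):
    # Binary logistic loss for a label y in {-1, +1} and linear score w.x
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm(w, x, y, eps):
    # Gradient of the loss with respect to the *input* x:
    # dJ/dx = -y * sigmoid(-y * w.x) * w
    s = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))
    grad_x = -y * s * w
    # FGSM step: shift every feature by eps in the direction of the gradient sign
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and sample (illustration only)
w = np.array([1.5, -2.0, 0.7])
x = np.array([0.2, -0.4, 0.9])
y = 1

x_adv = fgsm(w, x, y, eps=0.1)
print("clean loss:", logistic_loss(w, x, y))
print("adversarial loss:", logistic_loss(w, x_adv, y))
```

For this linear model the FGSM step provably increases the loss, while the perturbation of each feature stays within the `eps` budget under the L-infinity norm.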
Pages: 1080-1086
Page count: 7
References
10 items
[1]  
Daria L., 2019, 2019 3 WORLD C SMART, DOI 10.1109/WorldS4.2019.8904038
[2]  
Elsayed N, 2019, INT J ADV COMPUT SC, V10, P654
[3]  
Goodfellow I. J., 2014, INT C LEARNING REPRE
Lavrova D., Zegzhda D., Yarmak A., Using GRU neural network for cyber-attack detection in automated process control systems [J]. 2019 IEEE INTERNATIONAL BLACK SEA CONFERENCE ON COMMUNICATIONS AND NETWORKING (BLACKSEACOM), 2019
[5]  
Moustafa N., The UNSW-NB15 dataset
[6]  
Moustafa N, 2015, 2015 MILITARY COMMUNICATIONS AND INFORMATION SYSTEMS CONFERENCE (MILCIS)
[7]  
Nikolenko S., 2018, DEEP LEARNING DIVE W
[8]  
Szegedy C, 2014, arXiv, DOI arXiv:1312.6199
[9]  
Tabassi E., 2019, NISTIR 8269, DOI 10.6028/NIST.IR.8269-draft
[10]  
Wiyanto R., 2018, arXiv:1808.07945 [cs.LG]