Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks

Cited by: 2
Authors
Alahmed, Shahad [1 ]
Alasad, Qutaiba [2 ]
Yuan, Jiann-Shiun [3 ]
Alawad, Mohammed [4 ]
Affiliations
[1] Tikrit Univ, Dept Comp Sci, POB 42, Al Qadisiyah, Iraq
[2] Tikrit Univ, Dept Petr Proc Engn, POB 42, Al Qadisiyah, Iraq
[3] Univ Cent Florida, Dept Elect & Comp Engn, Orlando, FL 32816 USA
[4] Wayne State Univ, Dept Elect & Comp Engn, Detroit, MI 48202 USA
Keywords
deep learning; network intrusion detection system (NIDS); DeepFool; poisoning attacks; Pearson correlation method; CICIDS2019
DOI
10.3390/a17040155
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly as Machine Learning (ML) systems become extensively integrated into our daily routines. These systems are increasingly targeted by malicious attacks that seek to distort their functionality through poisoning. Such attacks warp the intended operation of these services, deviating them from their true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones and compromising the integrity of smart technology-based systems such as Network Intrusion Detection Systems (NIDSs). It is therefore necessary to continue studying the resilience of deep learning network systems under poisoning attacks, specifically attacks that interfere with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)-based NIDSs against untethered white-box attacks. More specifically, it introduces a poisoning attack technique designed especially for deep learning, which injects varying amounts of altered instances into training datasets at diverse rates and then investigates the attack's influence on model performance. We observe that increasing injection rates (from 1% to 50%) with a randomly amplified distribution only slightly affect the overall accuracy of the system (0.93 at the end of the experiments). However, the other measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that data manipulation poisoning attacks do impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDSs under poisoning attacks and emphasize the importance of securing such systems against these sophisticated threats, for which defense techniques should be considered.
Our analysis, supported by experimental results, shows that the generated poisoned data significantly impact model performance and are difficult to detect.
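As a rough illustration of the experimental setup the abstract describes, the sketch below injects label-flipped, amplitude-scaled copies of training instances at increasing rates (1% to 50%) and reports accuracy, PPV, FPR, and MSE on clean test data. This is not the authors' implementation: the synthetic one-feature "traffic" distribution, the midpoint-threshold stand-in for a DL detector, and the 1x-3x amplification range are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_traffic(n):
    """Toy stand-in for NIDS flow data: benign (label 0) clusters low,
    attack (label 1) clusters high on a single synthetic feature."""
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=y * 2.0, scale=0.7, size=n)
    return x, y

def poison(x, y, rate):
    """Append label-flipped, randomly amplified copies of a fraction of the
    training points -- a generic data-manipulation poisoning sketch."""
    k = int(len(y) * rate)
    idx = rng.choice(len(y), size=k, replace=False)
    x_p = np.concatenate([x, x[idx] * rng.uniform(1.0, 3.0, k)])  # amplified features
    y_p = np.concatenate([y, 1 - y[idx]])                         # flipped labels
    return x_p, y_p

def fit_threshold(x, y):
    """'Train' the simplest possible detector: midpoint of the class means."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2.0

def evaluate(thr, x, y):
    pred = (x > thr).astype(int)
    tp = np.sum((pred == 1) & (y == 1)); fp = np.sum((pred == 1) & (y == 0))
    tn = np.sum((pred == 0) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
    acc = (tp + tn) / len(y)
    ppv = tp / (tp + fp) if tp + fp else 0.0   # positive predictive value
    fpr = fp / (fp + tn) if fp + tn else 0.0   # false positive rate
    mse = np.mean((pred - y) ** 2)
    return acc, ppv, fpr, mse

x_tr, y_tr = make_traffic(5000)
x_te, y_te = make_traffic(2000)
for rate in (0.0, 0.01, 0.1, 0.5):
    xp, yp = poison(x_tr, y_tr, rate)
    thr = fit_threshold(xp, yp)
    acc, ppv, fpr, mse = evaluate(thr, x_te, y_te)
    print(f"rate={rate:4.0%}  acc={acc:.3f}  PPV={ppv:.3f}  FPR={fpr:.3f}  MSE={mse:.3f}")
```

The point of the toy run is the shape of the result, not the numbers: overall accuracy can remain deceptively stable while class-sensitive measures such as PPV and FPR drift, which is the pattern the paper reports for its DL-based NIDS.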
Pages: 19