Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks

Cited by: 0
Authors
Alahmed, Shahad [1 ]
Alasad, Qutaiba [2 ]
Yuan, Jiann-Shiun [3 ]
Alawad, Mohammed [4 ]
Affiliations
[1] Tikrit Univ, Dept Comp Sci, POB 42, Al Qadisiyah, Iraq
[2] Tikrit Univ, Dept Petr Proc Engn, POB 42, Al Qadisiyah, Iraq
[3] Univ Cent Florida, Dept Elect & Comp Engn, Orlando, FL 32816 USA
[4] Wayne State Univ, Dept Elect & Comp Engn, Detroit, MI 48202 USA
Keywords
deep learning; network intrusion detection system (NIDS); DeepFool; poisoning attacks; Pearson correlation method; CICIDS2019; NETWORK
DOI
10.3390/a17040155
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly as Machine Learning (ML) systems become extensively integrated into our daily routines. These systems are increasingly targeted by malicious attacks that seek to distort their functionality through poisoning. Such attacks warp the intended operation of these services, deviating them from their true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones and compromising the integrity of smart technology-based systems such as Network Intrusion Detection Systems (NIDSs). Continued study of the resilience of deep learning network systems under poisoning attacks is therefore necessary, particularly attacks that interfere with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)-based NIDSs against untethered white-box attacks. More specifically, it introduces a poisoning attack technique designed especially for deep learning: varying amounts of altered instances are injected into the training dataset at diverse rates, and the attack's influence on model performance is then investigated. We observe that increasing injection rates (from 1% to 50%) with a randomly amplified distribution only slightly affect the overall accuracy of the system (0.93 at the end of the experiments). However, the remaining measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that data-manipulation poisoning attacks do impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDSs under poisoning attacks and emphasize the importance of securing such systems against these sophisticated threats, for which defense techniques should be considered.
Our analysis, supported by experimental results, shows that the generated poisoned data significantly impact model performance and are difficult to detect.
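The abstract describes injecting altered instances into the training set at rates from 1% to 50% and then scoring the model with PPV, FPR, and MSE. The paper's exact poisoning procedure is not given here, so the following is only a minimal illustrative sketch: it poisons a chosen fraction of labels (simple label-flipping toward an assumed benign class) and computes PPV and FPR from confusion-matrix counts. The function names and the label-flipping choice are assumptions, not the authors' method.

```python
import random

def poison_labels(y, rate, target_label=0, seed=42):
    """Return a copy of labels y with a `rate` fraction flipped to target_label.

    This is generic label-flipping poisoning, shown only to illustrate the
    idea of injection rates (e.g. 0.01 to 0.50); the paper's own attack
    manipulates training data in its own specific way.
    """
    rng = random.Random(seed)
    n_poison = int(len(y) * rate)
    idx = rng.sample(range(len(y)), n_poison)  # indices to corrupt
    y_poisoned = list(y)
    for i in idx:
        y_poisoned[i] = target_label
    return y_poisoned, set(idx)

def ppv_fpr(tp, fp, tn, fn):
    """Compute PPV (precision) and FPR from confusion-matrix counts."""
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return ppv, fpr

# Example: poison 50% of an all-attack (label 1) training set.
y_train = [1] * 10
y_bad, poisoned_idx = poison_labels(y_train, rate=0.50)
print(sum(y_bad), len(poisoned_idx))  # 5 labels remain 1; 5 were flipped
```

A low PPV with a rising FPR, as reported in the abstract, is exactly the signature such metrics would expose even while overall accuracy stays high on an imbalanced NIDS dataset.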
Pages: 19