Constraining Adversarial Attacks on Network Intrusion Detection Systems: Transferability and Defense Analysis

Cited by: 2
Authors
Alhussien, Nour [1 ]
Aleroud, Ahmed [1 ]
Melhem, Abdullah [1 ]
Khamaiseh, Samer Y. [2 ]
Affiliations
[1] Augusta Univ, Sch Comp & Cyber Sci, Augusta, GA 30912 USA
[2] Miami Univ, Comp Sci & Software Engn Dept, Oxford, OH 45056 USA
Source
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT | 2024, Vol. 21, No. 3
Funding
U.S. National Science Foundation
Keywords
Training; Data models; Telecommunication traffic; Analytical models; Robustness; Perturbation methods; Glass box; Adversarial attacks; network intrusion detection systems; artificial intelligence; neural networks; computer networks;
DOI
10.1109/TNSM.2024.3357316
CLC Number
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Adversarial attacks have been extensively studied in the domain of deep image classification, but their impact on other domains, such as Machine and Deep Learning-based Network Intrusion Detection Systems (NIDSs), has received limited attention. While adversarial attacks on images are generally more straightforward because the input domain imposes fewer constraints, generating adversarial examples in the network domain is more challenging due to the diverse types of network traffic and the need to maintain its validity. Prior research has introduced constraints for generating adversarial examples against NIDSs, but their effectiveness across different attack settings, including transferability, targetability, defenses, and overall attack success, has not been thoroughly examined. In this paper, we propose a novel set of domain constraints for network traffic that preserve the statistical and semantic relationships between traffic features while ensuring the validity of the perturbed adversarial traffic. Our constraints fall into four types: feature mutability constraints, feature value constraints, feature dependency constraints, and distribution-preserving constraints. We evaluated the impact of these constraints on white-box and black-box attacks using two intrusion detection datasets. Our results demonstrate that the introduced constraints significantly affect the success of white-box attacks. We found that the transferability of adversarial examples depends on the similarity between the targeted models and the models to which the examples are transferred, regardless of the attack type or the presence of constraints. We also observed that adversarial training improved the robustness of most machine learning and deep learning-based NIDSs against unconstrained attacks, while providing some resilience against constrained attacks. In practice, this suggests that pre-existing signatures of constrained attacks could be used to combat new variations or zero-day adversarial attacks in real-world NIDSs.
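For illustration, the minimal Python sketch below shows how the four constraint types named in the abstract might be applied as a projection step after an unconstrained perturbation (e.g., from a gradient-based attack). The feature layout, bounds, and the project_to_domain / enforce_dependencies helpers are hypothetical placeholders, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical flow-level feature layout (not the paper's exact feature set):
# 0: duration (s), 1: total_bytes, 2: packet_count, 3: mean_pkt_size
MUTABLE = np.array([True, True, True, False])    # derived features are immutable
LOW = np.array([0.0, 0.0, 1.0, 0.0])             # assumed legal lower bounds
HIGH = np.array([3600.0, 1e9, 1e6, 65535.0])     # assumed legal upper bounds

def enforce_dependencies(x):
    """Feature-dependency constraint: keep derived features consistent."""
    x = x.copy()
    x[3] = x[1] / max(x[2], 1.0)   # mean_pkt_size = total_bytes / packet_count
    return x

def project_to_domain(x_orig, x_adv, eps=0.1):
    """Project a perturbed flow back into the valid traffic domain (sketch)."""
    # Distribution-preserving constraint (one possible reading): bound the
    # relative perturbation so perturbed traffic stays near the original.
    x = np.clip(x_adv, x_orig * (1 - eps), x_orig * (1 + eps))
    # Feature-mutability constraint: immutable features keep original values.
    x = np.where(MUTABLE, x, x_orig)
    # Feature-value constraint: clip each feature to its legal range.
    x = np.clip(x, LOW, HIGH)
    # Feature-dependency constraint: recompute derived features last.
    return enforce_dependencies(x)

x_orig = np.array([12.0, 3000.0, 40.0, 75.0])
x_adv = x_orig + np.random.uniform(-5.0, 5.0, size=4)  # unconstrained perturbation
print(project_to_domain(x_orig, x_adv))
```

Note that the ordering in this sketch is deliberate: the dependency step runs last so that clipping cannot re-break the relationship between derived features and their parents.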
Pages: 2751-2772
Page count: 22