Generating practical adversarial examples against learning-based network intrusion detection systems

Cited by: 2
Authors
Kumar, Vivek [1 ,2 ]
Kumar, Kamal [3 ]
Singh, Maheep [1 ]
Affiliations
[1] Natl Inst Technol, Dept Comp Sci & Engn, Srinagar 246174, Uttarakhand, India
[2] THDC Inst Hydropower Engn & Technol, Dept Comp Sci & Engn, Tehri 249124, Uttarakhand, India
[3] IGDTUW, Dept Informat Technol, Delhi 110006, India
Keywords
Adversarial example; Domain constraints; Deep learning; Machine learning; Variational autoencoder; ATTACKS;
DOI
10.1007/s12243-024-01021-9
Chinese Library Classification (CLC) code
TN [Electronic technology, communication technology];
Discipline classification code
0809 ;
Abstract
There has been significant progress in the design of intrusion detection systems (IDS) that use deep learning (DL) and machine learning (ML) methods to detect threats in a computer network. Unfortunately, these DL/ML-based IDS are vulnerable to adversarial examples, in which a malicious data sample is slightly perturbed so that the IDS misclassifies it while it retains its malicious properties. Unlike the image recognition domain, the network domain is subject to so-called domain constraints: multifarious interrelationships and dependencies between features. To be practical and realizable, an adversary must ensure that its adversarial examples comply with these domain constraints. Recently, generative models such as GANs and VAEs have been used extensively to generate adversarial examples against IDS. However, the majority of these techniques produce adversarial examples that do not satisfy all domain constraints. Current generative methods also lack explicit restrictions on the amount of perturbation a malicious data sample undergoes while an adversarial example is crafted, which can lead to the generation of invalid data samples. To address these limitations, this work presents a solution that uses a variational autoencoder to generate adversarial examples which not only cause misclassification by an IDS but also satisfy domain constraints. Instead of perturbing the data samples themselves, adversarial examples are crafted by perturbing the latent-space representation of each sample, which allows adversarial examples to be generated under limited perturbation. This research explores novel applications of generative networks for generating constraint-satisfying adversarial examples.
The experimental results support these claims, with an attack success rate of 64.8% against ML/DL-based IDS. The trained model could further be integrated into an operational IDS to strengthen its robustness against adversarial examples; however, this is outside the scope of this work.
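The core idea described in the abstract, i.e., perturbing a sample's latent representation under a bounded budget and then enforcing domain constraints on the decoded result, can be sketched in a toy form. Everything below is a hypothetical illustration, not the paper's actual method: the linear encoder/decoder stands in for a trained VAE, the random-direction search stands in for a real attack objective, and `project_constraints` is an invented example of feature interdependencies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained VAE: a linear encoder and its pseudo-inverse
# as the decoder (an assumption for illustration, not the paper's model).
W_enc = rng.normal(size=(4, 8))   # 8 input features -> 4 latent dimensions
W_dec = np.linalg.pinv(W_enc)     # decoder approximately inverts the encoder

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def project_constraints(x):
    """Hypothetical domain constraints: all features non-negative, and
    feature 1 may never exceed feature 0 (an interdependency example)."""
    x = np.clip(x, 0.0, None)
    x[1] = min(x[1], x[0])
    return x

def craft_adversarial(x, eps=0.3, steps=20, step_size=0.05):
    """Perturb the latent representation (not x itself) under an L-inf
    budget eps, then decode and project onto the constraint set."""
    z0 = encode(x)
    z = z0.copy()
    for _ in range(steps):
        z = z + step_size * rng.normal(size=z.shape)  # random-direction search
        z = z0 + np.clip(z - z0, -eps, eps)           # bounded latent perturbation
    return project_constraints(decode(z))

x = np.abs(rng.normal(size=8))        # a toy "malicious" flow record
x_adv = craft_adversarial(x)
print(np.all(x_adv >= 0), x_adv[1] <= x_adv[0])  # → True True
```

In a real attack, the random-direction search would be replaced by an objective that drives the target IDS toward misclassification; the sketch only shows why latent-space crafting keeps perturbations bounded and constraint projection keeps the decoded sample valid.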
Pages: 209 - 226
Number of pages: 18
Related papers
50 records
  • [41] Detecting Adversarial Examples for Network Intrusion Detection System with GAN
    Peng, Ye
    Fu, Guobin
    Luo, Yingguang
    Hu, Jia
    Li, Bin
    Yan, Qifei
    PROCEEDINGS OF 2020 IEEE 11TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND SERVICE SCIENCE (ICSESS 2020), 2020, : 6 - 10
  • [42] A Framework for Generating Evasion Attacks for Machine Learning Based Network Intrusion Detection Systems
    Mogg, Raymond
    Enoch, Simon Yusuf
    Kim, Dong Seong
    INFORMATION SECURITY APPLICATIONS, 2021, 13009 : 51 - 63
  • [43] Adversarial Machine Learning for Network Intrusion Detection Systems: A Comprehensive Survey
    He, Ke
    Kim, Dan Dongseong
    Asghar, Muhammad Rizwan
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2023, 25 (01) : 538 - 566
  • [44] Machine Learning-Based Systems for Intrusion Detection in VANETs
    Idris, Hala Eldaw
    Hosni, Ines
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 3, INTELLISYS 2024, 2024, 1067 : 603 - 614
  • [45] On Credibility of Adversarial Examples Against Learning-Based Grid Voltage Stability Assessment
    Song, Qun
    Tan, Rui
    Ren, Chao
    Xu, Yan
    Lou, Yang
    Wang, Jianping
    Gooi, Hoay Beng
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (02) : 585 - 599
  • [46] Defending Against Deep Learning-Based Traffic Fingerprinting Attacks With Adversarial Examples
    Hayden, Blake
    Walsh, Timothy
    Barton, Armon
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2025, 28 (01)
  • [47] Amplification methods to promote the attacks against machine learning-based intrusion detection systems
    Zhang, Sicong
    Xu, Yang
    Zhang, Xinyu
    Xie, Xiaoyao
    APPLIED INTELLIGENCE, 2024, 54 (04) : 2941 - 2961
  • [49] Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors
    Han, Dongqi
    Wang, Zhiliang
    Zhong, Ying
    Chen, Wenqi
    Yang, Jiahai
    Lu, Shuqiang
    Shi, Xingang
    Yin, Xia
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (08) : 2632 - 2647
  • [50] Generative Adversarial Attacks Against Intrusion Detection Systems Using Active Learning
    Shu, Dule
    Leslie, Nandi O.
    Kamhoua, Charles A.
    Tucker, Conrad S.
    PROCEEDINGS OF THE 2ND ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING, WISEML 2020, 2020, : 1 - 6