Generating practical adversarial examples against learning-based network intrusion detection systems

Cited by: 2
Authors
Kumar, Vivek [1 ,2 ]
Kumar, Kamal [3 ]
Singh, Maheep [1 ]
Affiliations
[1] Natl Inst Technol, Dept Comp Sci & Engn, Srinagar 246174, Uttarakhand, India
[2] THDC Inst Hydropower Engn & Technol, Dept Comp Sci & Engn, Tehri 249124, Uttarakhand, India
[3] IGDTUW, Dept Informat Technol, Delhi 110006, India
Keywords
Adversarial example; Domain constraints; Deep learning; Machine learning; Variational autoencoder; Attacks
DOI
10.1007/s12243-024-01021-9
Chinese Library Classification: TN [Electronic technology, communication technology]
Subject classification code: 0809
Abstract
There has been significant progress in the design of intrusion detection systems (IDS) that use deep learning (DL)/machine learning (ML) methods for detecting threats in a computer network. Unfortunately, these DL/ML-based IDS are vulnerable to adversarial examples, wherein a malicious data sample can be slightly perturbed to cause a misclassification by an IDS while retaining its malicious properties. Unlike the image recognition domain, the network domain has certain constraints, known as domain constraints, which are the multifarious interrelationships and dependencies between features. To be considered practical and realizable, an adversary must ensure that the adversarial examples comply with these domain constraints. Recently, generative models such as GANs and VAEs have been used extensively to generate adversarial examples against IDS. However, the majority of these techniques generate adversarial examples that do not satisfy all domain constraints. Moreover, current generative methods lack explicit restrictions on the amount of perturbation a malicious data sample undergoes during the crafting of adversarial examples, leading to the potential generation of invalid data samples. To address these limitations, this work presents a solution that utilizes a variational autoencoder to generate adversarial examples that not only result in misclassification by an IDS but also satisfy domain constraints. Instead of perturbing the data samples themselves, the adversarial examples are crafted by perturbing the latent-space representation of each data sample. This allows the generation of adversarial examples under limited perturbation. This research explores novel applications of generative networks for generating constraint-satisfying adversarial examples.
The experimental results support these claims with an attack success rate of 64.8% against ML/DL-based IDS. The trained model can be further integrated into an operational IDS to strengthen its robustness against adversarial examples; however, this is outside the scope of this work.
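The latent-space attack described in the abstract can be sketched as follows. This is a hypothetical toy illustration of the general idea only, not the paper's implementation: linear maps stand in for the trained VAE encoder/decoder, a fixed linear threshold stands in for the target IDS, and the names `clip_domain`, `latent_attack`, and the bound `eps` are all assumptions introduced here. The key point it shows is that the bounded perturbation is applied to the latent code z rather than to the input features directly, and the decoded sample is projected back into the valid feature domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder"/"decoder": linear maps standing in for the VAE networks.
W_enc = rng.normal(size=(4, 2))   # 4 input features -> 2-dim latent code
W_dec = np.linalg.pinv(W_enc)     # approximate inverse back to feature space

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

# Toy classifier standing in for the IDS: 1 = "malicious", 0 = "benign".
w_ids = np.array([1.0, -1.0, 0.5, 0.5])

def ids_predict(x):
    return int(x @ w_ids > 0.0)

def clip_domain(x):
    # Stand-in for domain constraints; real network features have richer
    # interdependencies (e.g., flow counts consistent with byte counts).
    return np.clip(x, 0.0, 1.0)

def latent_attack(x, eps=0.5, n_trials=200):
    """Search for a bounded latent perturbation that flips the IDS decision.

    Perturbs the latent code z = encode(x) within an eps-ball, decodes,
    and projects back into the valid domain. Returns the first decoded
    sample the IDS labels benign, or None if no trial succeeds.
    """
    z = encode(x)
    for _ in range(n_trials):
        delta = rng.uniform(-eps, eps, size=z.shape)  # limited perturbation
        x_adv = clip_domain(decode(z + delta))
        if ids_predict(x_adv) == 0:                   # evades detection
            return x_adv
    return None
```

Because the perturbation budget is enforced in latent space and every candidate passes through `clip_domain`, any returned sample both evades the toy detector and respects the (here trivial) domain constraints, which mirrors the paper's motivation for perturbing z instead of x.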
Pages: 209-226 (18 pages)