Securing Network Traffic Classification Models against Adversarial Examples Using Derived Variables

Cited by: 2
Authors
Adeke, James Msughter [1,2]
Liu, Guangjie [1,2]
Zhao, Junjie [1,2]
Wu, Nannan [3]
Bashir, Hafsat Muhammad [1]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Elect & Informat Engn, Nanjing 210044, Peoples R China
[2] Minist Educ, Key Lab Intelligent Support Technol Complex Enviro, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
Keywords
machine learning; adversarial attack; network traffic classification; derived variables; robustness; intrusion detection
DOI
10.3390/fi15120405
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which inputs are maliciously modified by adversaries to produce a desired output. Adversarial training is an effective defense against such attacks, but it relies on access to a substantial number of AEs, a prerequisite that entails significant computational cost and carries the inherent limitation of degraded performance on clean data. To address these problems, this study proposes a novel approach to improving the robustness of ML-based network traffic classification models by integrating derived variables (DVars) into training. Unlike adversarial training, our approach enhances training with DVars, which introduce randomness into the input data. DVars are generated from the baseline dataset and significantly improve the model's resilience to AEs. To evaluate the effectiveness of DVars, experiments were conducted on the CSE-CIC-IDS2018 dataset with three state-of-the-art ML-based models: decision tree (DT), random forest (RF), and k-nearest neighbors (KNN). The results show that DVars improve the accuracy of KNN under attack from 0.45 to 0.84 for low-intensity attacks and from 0.32 to 0.66 for high-intensity attacks. Both DT and RF likewise achieve significant accuracy gains when subjected to attacks of different intensities. Moreover, DVars are computationally efficient, scalable, and do not require access to AEs.
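
The abstract describes DVars only at a high level. The sketch below illustrates the stated idea in Python: derive extra variables from the baseline features, inject randomness through them, and train a standard classifier on the augmented data. The function add_derived_variables, its random-projection derivation rule, and all parameters are illustrative assumptions made for this record, not the paper's actual construction.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def add_derived_variables(X, rng, n_derived=4):
    # Append derived variables (DVars) to the feature matrix.
    # Assumed derivation rule: each DVar is a random linear combination
    # of the baseline features plus small noise, injecting randomness
    # into the inputs; the paper's actual rule may differ.
    n_samples, n_features = X.shape
    dvars = [X @ rng.normal(size=n_features)
             + rng.normal(scale=0.1, size=n_samples)
             for _ in range(n_derived)]
    return np.column_stack([X] + dvars)

# Toy stand-in for a flow-feature matrix such as CSE-CIC-IDS2018 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

X_aug = add_derived_variables(X, rng)
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("clean accuracy:", clf.score(X_te, y_te))

In the paper's setting, robustness would then be assessed by re-evaluating the trained model on adversarially perturbed inputs; that evaluation step is omitted from this sketch.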
Pages: 21