TAD: Transfer learning-based multi-adversarial detection of evasion attacks against network intrusion detection systems

Cited by: 38
Authors
Debicha, Islam [1 ,2 ]
Bauwens, Richard [1 ]
Debatty, Thibault [2 ]
Dricot, Jean-Michel [1]
Kenaza, Tayeb [3 ]
Mees, Wim [2 ]
Affiliations
[1] Univ Libre Bruxelles, Cybersecur Res Ctr, B-1050 Brussels, Belgium
[2] Royal Mil Acad, Cyber Def Lab, B-1000 Brussels, Belgium
[3] Ecole Mil Polytech, Comp Secur Lab, Algiers, Algeria
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2023, Vol. 138
Keywords
Intrusion detection system; Machine learning; Evasion attacks; Adversarial detection; Transfer learning; Data fusion; ROBUSTNESS;
DOI
10.1016/j.future.2022.08.011
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Classification Code
081202;
Abstract
Nowadays, intrusion detection systems based on deep learning deliver state-of-the-art performance. However, recent research has shown that specially crafted perturbations, called adversarial examples, can significantly reduce the performance of these intrusion detection systems. The objective of this paper is to design an efficient transfer learning-based adversarial detector and then to assess the effectiveness of using multiple strategically placed adversarial detectors compared to a single adversarial detector for intrusion detection systems. In our experiments, we implement existing state-of-the-art models for intrusion detection and attack them with a set of chosen evasion attacks. To detect these adversarial attacks, we design and implement multiple transfer learning-based adversarial detectors, each receiving a subset of the information passed through the IDS. By fusing their respective decisions, we show that multiple detectors can further improve the detectability of adversarial traffic compared to a single detector in the case of a parallel IDS design. (C) 2022 Elsevier B.V. All rights reserved.
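To make the parallel decision-fusion idea in the abstract concrete, the sketch below shows one hedged interpretation: several detectors, each trained on a different subset of the features passed through the IDS, voting on whether traffic is adversarial. The synthetic data, the logistic-regression detectors, the feature split, and the majority-vote rule are illustrative assumptions, not the authors' transfer learning-based detectors or their fusion scheme.

```python
# A minimal, self-contained sketch (not the authors' implementation) of the
# parallel design summarized in the abstract: several adversarial detectors,
# each seeing only a subset of the IDS features, whose binary decisions are
# fused by majority vote. The data, feature split, detector model, and fusion
# rule are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for IDS flow features; label 1 marks adversarial traffic.
X = rng.normal(size=(1000, 12))
y = (X.sum(axis=1) + 0.5 * rng.normal(size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

# Hypothetical split: each detector receives a different feature subset.
feature_subsets = [slice(0, 4), slice(4, 8), slice(8, 12)]

detectors = []
for cols in feature_subsets:
    clf = LogisticRegression().fit(X_train[:, cols], y_train)
    detectors.append((cols, clf))

# Decision fusion: flag traffic as adversarial when a majority of detectors agree.
votes = np.stack([clf.predict(X_test[:, cols]) for cols, clf in detectors])
fused = (votes.sum(axis=0) >= 2).astype(int)

print(f"fused detection accuracy (synthetic data): {(fused == y_test).mean():.2f}")
```

Majority voting is only one simple fusion rule; weighted or score-level fusion could be substituted in the same structure.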
Pages: 185-197
Number of pages: 13