Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems

Cited by: 59
Authors
Apruzzese, Giovanni [1 ]
Andreolini, Mauro [2 ]
Ferretti, Luca [2 ]
Marchetti, Mirco [3 ]
Colajanni, Michele [4 ]
Affiliations
[1] Univ Liechtenstein, Inst Informat Syst, Vaduz, Liechtenstein
[2] Univ Modena & Reggio Emilia, Dept Phys Informat & Math, Modena, Italy
[3] Univ Modena & Reggio Emilia, Dept Engn Enzo Ferrari, Modena, Italy
[4] Univ Bologna, Dept Informat Sci & Engn, Bologna, Italy
Source
DIGITAL THREATS: RESEARCH AND PRACTICE | 2022, Vol. 3, No. 3
Keywords
Cybersecurity; network intrusion detection; adversarial attacks; evasion; NIDS; CLASSIFIERS; TAXONOMY;
DOI
10.1145/3469659
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The incremental diffusion of machine learning algorithms in support of cybersecurity is creating novel defensive opportunities but also new types of risks. Multiple studies have shown that machine learning methods are vulnerable to adversarial attacks that craft tiny perturbations aimed at decreasing the effectiveness of threat detection. We observe that the existing literature assumes threat models that are inappropriate for realistic cybersecurity scenarios, because it considers opponents with complete knowledge of the cyber detector, or opponents that can interact freely with the target systems. By focusing on Network Intrusion Detection Systems (NIDS) based on machine learning, we identify and model the real capabilities and circumstances required by attackers to carry out feasible and successful adversarial attacks. We then apply our model to several adversarial attacks proposed in the literature and highlight the limits and merits that determine whether they translate into actual adversarial attacks. The contributions of this article can help harden defensive systems by letting cyber defenders address the most critical and realistic issues, and can benefit researchers by allowing them to devise novel forms of adversarial attacks based on realistic threat models.
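To make the notion of a "tiny perturbation" against an ML-based NIDS concrete, the following is a minimal, self-contained sketch (not taken from the paper): a toy linear flow classifier with hand-set weights, evaded by slightly lowering one influential flow feature. All feature names, weights, and thresholds here are invented for illustration only.

```python
# Toy illustration of a feature-space evasion attack on a "NIDS-like"
# linear classifier. Every number below is invented for illustration.

# Flow features: [duration_s, bytes_sent, pkts_sent, distinct_dst_ports]
weights = [0.01, 0.0001, 0.01, 0.5]   # hand-set "maliciousness" weights
bias = -4.0
threshold = 0.0                        # score > threshold => flagged malicious

def score(flow):
    """Linear score: dot(weights, flow) + bias."""
    return sum(w * x for w, x in zip(weights, flow)) + bias

malicious_flow = [10.0, 5000.0, 40.0, 8.0]
print("original score:", score(malicious_flow))      # 1.0 -> detected

# Evasion: slightly reduce the most influential feature (distinct
# destination ports) while keeping the flow semantically plausible.
perturbed = list(malicious_flow)
perturbed[3] = 3.0                                    # scan fewer ports

evaded = score(perturbed) <= threshold
print("perturbed score:", score(perturbed))           # -1.5 -> not flagged
print("evades detector:", evaded)
```

Note that even this toy example hints at the paper's point: the perturbation must respect domain constraints (ports are non-negative counts, the flow must remain functional), which is far more restrictive than unconstrained feature-space perturbations assumed in much of the literature.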
Pages: 19