Hardening machine learning denial of service (DoS) defences against adversarial attacks in IoT smart home networks

Cited by: 55
Authors
Anthi, Eirini [1 ]
Williams, Lowri [1 ]
Javed, Amir [1 ]
Burnap, Pete [1 ]
Affiliation
[1] Cardiff Univ, Sch Comp Sci Informat, Cardiff, Wales
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Internet of Things (IoT); Smart homes; Networking; Supervised machine learning; Adversarial machine learning; Attack detection; Intrusion detection systems
DOI
10.1016/j.cose.2021.102352
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Machine learning based Intrusion Detection Systems (IDS) allow flexible and efficient automated detection of cyberattacks in Internet of Things (IoT) networks. However, this has also created an additional attack vector: the machine learning models which support the IDS's decisions may themselves be targeted by cyberattacks known as Adversarial Machine Learning (AML). In the context of IoT, AML can be used to manipulate the data and network traffic that traverse such devices. These perturbations increase the confusion in the decision boundaries of the machine learning classifier, so that malicious network packets are often misclassified as benign. Consequently, such misclassified attacks bypass machine learning based detectors, which increases the potential of significantly delaying attack detection and of further consequences such as personal information leakage, damaged hardware, and financial loss. Given the impact that these attacks may have, this paper proposes a rule-based approach towards generating AML attack samples and explores how they can be used to target a range of supervised machine learning classifiers used for detecting Denial of Service attacks in an IoT smart home network. The analysis explores which DoS packet features to perturb and how such adversarial samples can support increasing the robustness of supervised models through adversarial training. The results demonstrated that the performance of all the top-performing classifiers was affected, decreasing by a maximum of 47.2 percentage points when adversarial samples were present. Their performance improved following adversarial training, demonstrating their robustness towards such attacks. Crown Copyright (c) 2021 Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
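The workflow described in the abstract (perturbing selected features of malicious DoS traffic to craft adversarial samples, measuring the drop in detection performance, and recovering it via adversarial training) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the synthetic feature data, the perturbation rule in perturb(), the chosen feature indices, and the use of RandomForestClassifier are all assumptions made purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for packet/flow features (hypothetical; the paper uses real
# smart home network traffic features).
rng = np.random.default_rng(0)
n, d = 2000, 6
X_benign = rng.normal(0.0, 1.0, size=(n, d))
X_dos = rng.normal(2.0, 1.0, size=(n, d))   # DoS traffic shifted in feature space
X = np.vstack([X_benign, X_dos])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = DoS, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def perturb(X_mal, features=(0, 1), offset=1.5):
    # Rule-based perturbation (assumed rule): nudge selected features of
    # malicious samples towards the benign region of feature space.
    X_adv = X_mal.copy()
    X_adv[:, list(features)] -= offset
    return X_adv

X_dos_te = X_te[y_te == 1]
X_adv = perturb(X_dos_te)
print("DoS detection rate (clean):      ", clf.score(X_dos_te, np.ones(len(X_dos_te))))
print("DoS detection rate (adversarial):", clf.score(X_adv, np.ones(len(X_adv))))

# Adversarial training: augment the training set with perturbed DoS samples,
# still labelled as malicious, and refit the classifier.
X_tr_adv = perturb(X_tr[y_tr == 1])
X_aug = np.vstack([X_tr, X_tr_adv])
y_aug = np.concatenate([y_tr, np.ones(len(X_tr_adv))])
clf_robust = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print("DoS detection rate after adversarial training:",
      clf_robust.score(X_adv, np.ones(len(X_adv))))

In the paper's setting the perturbed quantities would be actual DoS packet features and the rules domain-specific; the sketch only conveys the overall procedure of crafting adversarial samples, observing the degraded detection rate, and restoring robustness through adversarial training.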
Pages: 12
References: 44 in total