Vulnerability of Deep Learning Model based Anomaly Detection in Vehicle Network

Times Cited: 0
Authors
Wang, Yi [1 ]
Chia, Dan Wei Ming [2 ]
Ha, Yajun [3 ]
Affiliations
[1] Continental Automot Singapore, Secur & Privacy Competence Ctr, Singapore, Singapore
[2] Singapore Inst Technol, Infocomm Technol, Singapore, Singapore
[3] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
Source
2020 IEEE 63RD INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS) | 2020
Keywords
Anomaly Detection; Deep Learning; LSTM;
DOI
10.1109/mwscas48704.2020.9184472
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Artificial Intelligence (AI) has been widely applied in Anomaly Detection Systems (ADS) for in-vehicle networks. An ADS should detect abnormal behaviors and attacks at the gateway Electronic Control Unit (ECU). Detection is usually required at low latency, to leave as large a time budget as possible for applying proper protections once an anomaly is successfully detected. However, AI models are highly vulnerable to black-box attacks, which require no prior knowledge of either the deep learning model's internals or its training data. In this paper, we first propose a new optimized method for adopting a Long Short-Term Memory (LSTM) deep learning model in the in-vehicle network ADS, which leads to an efficient detection system. The optimization is based on the characteristics of a dataset from a practical CAN in-vehicle network, together with tuning the existing parameters of the LSTM model. Second, we propose an efficient black-box attack on the adopted LSTM-based ADS, which requires only a small test dataset to train a new victim model whose inputs and outputs are compatible with the original model. Experimental results show that only around 50 man-hours are required to build a victim model that produces interpretations diverging from those of the original, unattacked model. This proves that the community should focus not only on developing efficient ADS, but also on how to protect them in future work.
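The detection principle the abstract describes can be illustrated with a minimal sketch: a predictor models the expected next value of a CAN signal, and an anomaly is flagged when the observed value deviates from the prediction by more than a threshold. The paper's actual ADS uses a tuned LSTM predictor; here a simple moving-average predictor stands in so the example stays self-contained, and the signal values and threshold are illustrative assumptions, not taken from the paper.

```python
from collections import deque

def detect_anomalies(signal, window=5, threshold=2.0):
    """Flag indices where a value deviates from the windowed
    moving-average prediction by more than `threshold`."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(signal):
        if len(history) == window:
            prediction = sum(history) / window
            if abs(value - prediction) > threshold:
                anomalies.append(i)
                # Feed the prediction (not the outlier) back into the
                # window so one spoofed frame does not poison later
                # predictions.
                history.append(prediction)
                continue
        history.append(value)
    return anomalies

# A steady engine-speed-like signal with one injected spoofed value.
stream = [100, 101, 100, 102, 101, 100, 250, 101, 100, 102]
print(detect_anomalies(stream))  # → [6]: only the spike is flagged
```

A black-box attacker in this setting never needs the predictor's internals: querying the deployed detector on probe streams and recording its verdicts yields labeled data on which a compatible surrogate model can be trained, which is the attack strategy the paper evaluates.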
Pages: 293-296
Page count: 4
References
9 records
[1] Anderson M., 2017, Crafting Adversarial Attacks on Recurrent Neural Networks, arXiv
[2] Bosch, 1991, CAN Specification Version 2.0
[3] Chockalingam V., 2017, DETECTING ATTACKS CA
[4] Marchetti M., 2017, IEEE Intelligent Vehicles Symposium (IV), p. 1577, DOI 10.1109/IVS.2017.7995934
[5] Narayanan S. N., 2015, USING DATA ANALYTICS
[6] Papernot N., 2016, IEEE Military Communications Conference (MILCOM), p. 49, DOI 10.1109/MILCOM.2016.7795300
[7] Taylor A., Leblanc S., Japkowicz N., 2016, Anomaly Detection in Automobile Control Network Data with Long Short-Term Memory Networks, Proceedings of the 3rd IEEE/ACM International Conference on Data Science and Advanced Analytics (DSAA 2016), pp. 130-139
[8] Theissler A., 2014, 1 INT WORKSH BIG DAT
[9] Wolf M., 2004, Workshop on Embedded Security in Cars, p. 1