Detecting Poisoning Attacks on Machine Learning in IoT Environments

Cited by: 45
Authors
Baracaldo, Nathalie [1 ]
Chen, Bryant [1 ]
Ludwig, Heiko [1 ]
Safavi, Amir [1 ]
Zhang, Rui [1 ]
Affiliations
[1] IBM Almaden Res Ctr, San Jose, CA 95120 USA
Source
2018 IEEE INTERNATIONAL CONGRESS ON INTERNET OF THINGS (ICIOT) | 2018
DOI
10.1109/ICIOT.2018.00015
Chinese Library Classification
TP31 [Computer software];
Discipline Classification Code
081202; 0835;
Abstract
Machine Learning (ML) plays an increasing role in the Internet of Things (IoT), both in the Cloud and at the Edge, using trained models for applications from factory automation to environmental sensing. However, using ML in IoT environments presents unique security challenges. In particular, adversaries can manipulate the training data by tampering with sensors' measurements. This type of attack, known as a poisoning attack, has been shown to significantly decrease overall performance, cause targeted misclassification or bad behavior, and insert "backdoors" and "neural trojans". Taking advantage of recently developed tamper-free provenance frameworks, we present a methodology that uses contextual information about the origin and transformation of data points in the training set to identify poisonous data. Our approach works with or without a trusted test data set. Using the proposed approach, poisoning attacks can be effectively detected and mitigated in IoT environments with reliable provenance information.
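The abstract's core idea of using provenance to identify poisonous data can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name `detect_poisoned_segments`, the `CentroidClassifier` stand-in learner, and the `threshold` parameter are all assumptions. The sketch assumes the trusted-test-set variant: training points sharing a provenance origin (e.g. the same sensor) form a segment, and a segment is flagged as likely poisoned if retraining without it noticeably improves accuracy on the trusted test data.

```python
import numpy as np

class CentroidClassifier:
    """Minimal nearest-centroid classifier used as a stand-in learner."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from each point to each class centroid; pick the nearest.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

def detect_poisoned_segments(X, y, provenance, X_trust, y_trust,
                             train_fn, threshold=0.02):
    """Flag provenance segments whose exclusion improves trusted accuracy.

    `provenance[i]` identifies the origin (e.g. device/sensor) of point i.
    """
    prov = np.asarray(provenance)
    base = train_fn(X, y)
    base_acc = np.mean(base.predict(X_trust) == y_trust)
    flagged = []
    for seg in sorted(set(provenance)):
        keep = prov != seg                       # retrain without this segment
        model = train_fn(X[keep], y[keep])
        acc = np.mean(model.predict(X_trust) == y_trust)
        if acc - base_acc > threshold:           # removal helped: likely poisoned
            flagged.append(seg)
    return flagged

# Toy data: segment "A" is clean; segment "B" contains label-flipped points.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0],
              [4.9, 5.1], [5.0, 4.9], [5.1, 5.1], [4.9, 4.9]])
y = np.array([0, 0, 1, 1, 0, 0, 0, 0])           # last four labels are flipped
provenance = ["A"] * 4 + ["B"] * 4
X_trust = np.array([[0.0, 0.0], [5.0, 5.0], [4.0, 4.0]])
y_trust = np.array([0, 1, 1])

flagged = detect_poisoned_segments(
    X, y, provenance, X_trust, y_trust,
    train_fn=lambda X, y: CentroidClassifier().fit(X, y))
# flagged -> ["B"]
```

Evaluating whole provenance segments rather than individual points is what makes this tractable: an attacker tampering with one sensor poisons all of that sensor's contributions, so the segment-level accuracy drop is much easier to detect than any single corrupted measurement.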
Pages: 57-64
Page count: 8