De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks

Cited by: 60
Authors
Chen, Jian [1 ]
Zhang, Xuxin [1 ]
Zhang, Rui [2 ]
Wang, Chen [1 ]
Liu, Ling [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Internet Technol & Engn Res & Dev Ctr ITEC, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Wuhan Univ Technol, Sch Comp Sci & Technol, Hubei Key Lab Transportat Internet Things, Wuhan 430070, Peoples R China
[3] Georgia Inst Technol, Coll Comp, Atlanta, GA 30332 USA
Funding
National Natural Science Foundation of China; U.S. National Science Foundation;
Keywords
Data models; Training; Testing; Predictive models; Computational modeling; Training data; Task analysis; Machine learning; data poisoning attack; attack-agnostic defense; generative adversarial network;
DOI
10.1109/TIFS.2021.3080522
Chinese Library Classification
TP301 [Theory and Methods];
Discipline code
081202 ;
Abstract
Machine learning techniques have been widely applied across domains. However, they are potentially vulnerable to data poisoning attacks, in which sophisticated attackers disrupt the learning procedure by injecting a fraction of malicious samples into the training dataset. Existing defenses against poisoning attacks are largely attack-specific: they are designed for one specific type of attack and do not work for others, mainly because of the distinct principles each attack follows; few general defense strategies have been developed. In this paper, we propose De-Pois, an attack-agnostic defense against poisoning attacks. The key idea of De-Pois is to train a mimic model whose purpose is to imitate the behavior of the target model trained on clean samples. We take advantage of Generative Adversarial Networks (GANs) to facilitate informative training-data augmentation as well as mimic-model construction. By comparing the prediction differences between the mimic model and the target model, De-Pois is able to distinguish poisoned samples from clean ones without explicit knowledge of the ML algorithm or the type of poisoning attack. We implement four types of poisoning attacks and evaluate De-Pois against five typical defense methods on realistic datasets. The results demonstrate that De-Pois is effective and efficient at detecting poisoned data under all four types of poisoning attacks, with both accuracy and F1-score above 0.9 on average.
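The detection step described above — flagging a sample as poisoned when the mimic model and the target model disagree too strongly on it — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `detect_poisoned`, the use of an absolute prediction gap, and the fixed `threshold` value are all assumptions for the sake of the example.

```python
import numpy as np

def detect_poisoned(mimic_preds, target_preds, threshold):
    """Flag samples whose mimic/target prediction gap exceeds a threshold.

    mimic_preds, target_preds: per-sample prediction scores from the
    mimic model (trained to imitate clean behavior) and the target model.
    Returns a boolean mask: True = suspected poisoned sample.
    """
    diffs = np.abs(np.asarray(mimic_preds) - np.asarray(target_preds))
    return diffs > threshold

# Toy usage: three samples agree closely, one diverges sharply.
mimic = [0.10, 0.92, 0.85, 0.15]
target = [0.12, 0.90, 0.20, 0.14]
mask = detect_poisoned(mimic, target, threshold=0.3)
print(mask.tolist())  # only the third sample is flagged
```

In practice the threshold would have to be calibrated (e.g. from the gap distribution on held-out clean data) rather than fixed by hand as it is here.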
Pages: 3412-3425
Page count: 14