De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks

Cited by: 67
Authors
Chen, Jian [1 ]
Zhang, Xuxin [1 ]
Zhang, Rui [2 ]
Wang, Chen [1 ]
Liu, Ling [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Internet Technol & Engn Res & Dev Ctr ITEC, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Wuhan Univ Technol, Sch Comp Sci & Technol, Hubei Key Lab Transportat Internet Things, Wuhan 430070, Peoples R China
[3] Georgia Inst Technol, Coll Comp, Atlanta, GA 30332 USA
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Data models; Training; Testing; Predictive models; Computational modeling; Training data; Task analysis; Machine learning; data poisoning attack; attack-agnostic defense; generative adversarial network;
DOI
10.1109/TIFS.2021.3080522
Chinese Library Classification: TP301 [Theory and Methods];
Discipline code: 081202;
Abstract
Machine learning techniques have been widely applied to various applications. However, they are potentially vulnerable to data poisoning attacks, where sophisticated attackers can disrupt the learning procedure by injecting a fraction of malicious samples into the training dataset. Existing defense techniques against poisoning attacks are largely attack-specific: they are designed for one specific type of attack but do not work for other types, mainly due to the distinct principles they follow, and few general defense strategies have been developed. In this paper, we propose De-Pois, an attack-agnostic defense against poisoning attacks. The key idea of De-Pois is to train a mimic model whose purpose is to imitate the behavior of the target model trained on clean samples. We take advantage of Generative Adversarial Networks (GANs) to facilitate informative training data augmentation as well as the mimic model construction. By comparing the prediction differences between the mimic model and the target model, De-Pois is able to distinguish poisoned samples from clean ones, without explicit knowledge of any ML algorithms or types of poisoning attacks. We implement four types of poisoning attacks and evaluate De-Pois against five typical defense methods on different realistic datasets. The results demonstrate that De-Pois is effective and efficient at detecting poisoned data under all four types of poisoning attacks, with both accuracy and F1-score above 0.9 on average.
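The prediction-difference idea in the abstract can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: the "mimic model" below is just a least-squares fit on trusted clean samples (standing in for the GAN-assisted mimic model of De-Pois), and the detection threshold of 0.5 is an arbitrary choice for this synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: clean samples follow y = 2x; poisoned samples invert the trend.
x_clean = rng.uniform(0.0, 1.0, 50)
y_clean = 2.0 * x_clean + rng.normal(0.0, 0.01, 50)
x_pois = rng.uniform(0.5, 1.0, 5)
y_pois = -2.0 * x_pois

x = np.concatenate([x_clean, x_pois])
y = np.concatenate([y_clean, y_pois])

# Stand-in mimic model: a least-squares fit on trusted clean data only.
w = np.linalg.lstsq(x_clean[:, None], y_clean, rcond=None)[0][0]

# Detection: flag samples whose observed label deviates from the mimic
# model's prediction by more than a threshold.
residual = np.abs(y - w * x)
suspect = residual > 0.5  # True marks a suspected poisoned sample
```

On this data the five appended poisoned points produce residuals of at least 2, while the clean points stay within the noise level, so thresholding the prediction difference separates the two groups cleanly.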
Pages: 3412-3425
Page count: 14