Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks

Cited by: 881
Authors
Wang, Bolun [1 ,2 ]
Yao, Yuanshun [2 ]
Shan, Shawn [2 ]
Li, Huiying [2 ]
Viswanath, Bimal [3 ]
Zheng, Haitao [2 ]
Zhao, Ben Y. [2 ]
Affiliations
[1] UC Santa Barbara, Santa Barbara, CA 93106 USA
[2] Univ Chicago, Chicago, IL 60637 USA
[3] Virginia Tech, Blacksburg, VA USA
Source
2019 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2019) | 2019
DOI
10.1109/SP.2019.00031
Chinese Library Classification
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor attacks, where hidden associations or triggers override normal classification to produce unexpected results. For example, a model with a backdoor always identifies a face as Bill Gates if a specific symbol is present in the input. Backdoors can stay hidden indefinitely until activated by an input, and they present a serious security risk to many security- or safety-related applications, e.g., biometric authentication systems or self-driving cars. We present the first robust and generalizable detection and mitigation system for DNN backdoor attacks. Our techniques identify backdoors and reconstruct possible triggers. We identify multiple mitigation techniques via input filters, neuron pruning and unlearning. We demonstrate their efficacy via extensive experiments on a variety of DNNs, against two types of backdoor injection methods identified by prior work. Our techniques also prove robust against a number of variants of the backdoor attack.
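The detection idea in the abstract can be illustrated in outline. Neural Cleanse reverse-engineers a candidate trigger for every output label by optimizing a minimal input perturbation that forces classification to that label, then flags labels whose trigger is anomalously small as likely backdoors. A minimal sketch of that outlier step, using median absolute deviation (MAD) over hypothetical trigger L1 norms (the `trigger_norms` values below are illustrative, not results from the paper):

```python
import numpy as np

def anomaly_indices(trigger_norms, threshold=2.0):
    """Flag labels whose reverse-engineered trigger is anomalously small.

    A backdoor trigger tends to need a far smaller perturbation than the
    change required to flip inputs into a clean label, so a label whose
    trigger L1 norm lies more than `threshold` normalized deviations
    *below* the median is reported as suspicious.
    """
    norms = np.asarray(trigger_norms, dtype=float)
    median = np.median(norms)
    mad = np.median(np.abs(norms - median))
    # 1.4826 scales the MAD to be consistent with the standard
    # deviation under a normal distribution.
    anomaly = (median - norms) / (1.4826 * mad)
    return np.flatnonzero(anomaly > threshold)

# Illustrative L1 norms of reconstructed triggers, one per output label;
# label 3's trigger is far smaller than the rest, suggesting a backdoor.
print(anomaly_indices([48.0, 52.0, 50.0, 6.0, 49.0, 51.0]))  # -> [3]
```

The one-sided test (only small norms are anomalous) matches the intuition above: an unusually *large* trigger does not indicate a backdoor.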
Pages: 707-723
Page count: 17