Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

Times cited: 584
Authors
Liao, Fangzhou [1 ]
Liang, Ming [1 ]
Dong, Yinpeng [1 ]
Pang, Tianyu [1 ]
Hu, Xiaolin [1 ]
Zhu, Jun [1 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing Natl Res Ctr Informat Sci & Technol, Tsinghua Lab Brain & Intelligence,BNRist Lab, Beijing 100084, Peoples R China
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
Funding
Beijing Natural Science Foundation;
DOI
10.1109/CVPR.2018.00191
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and by the denoised image. Compared with ensemble adversarial training, the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
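The abstract states the HGD objective only in prose. Below is a minimal PyTorch sketch of that guided-denoising loss, assuming an L1 distance on the target model's high-level outputs; the `denoiser` and `target_model` names are illustrative placeholders, not the authors' released code.

```python
import torch
import torch.nn as nn

def hgd_loss(denoiser: nn.Module,
             target_model: nn.Module,
             x_clean: torch.Tensor,
             x_adv: torch.Tensor) -> torch.Tensor:
    """High-level representation guided denoising loss (sketch).

    The loss is the distance between the target model's high-level
    outputs on the clean image and on the denoised adversarial image,
    so residual noise that would alter the representation is penalized.
    """
    x_denoised = denoiser(x_adv)              # remove adversarial perturbation
    with torch.no_grad():                     # the guiding model stays frozen
        rep_clean = target_model(x_clean)
    rep_denoised = target_model(x_denoised)   # gradients flow back to the denoiser
    return torch.abs(rep_clean - rep_denoised).mean()  # L1 distance (assumed)
```

Only the denoiser's parameters would be updated with this loss; supervising at a high-level layer rather than at the pixel level is what counters the error amplification effect described above.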
Pages: 1778-1787
Number of pages: 10