Towards Practical Certifiable Patch Defense with Vision Transformer

Cited by: 46
Authors
Chen, Zhaoyu [1 ]
Li, Bo [2 ]
Xu, Jianghe [2 ]
Wu, Shuang [2 ]
Ding, Shouhong [2 ]
Zhang, Wenqiang [1 ]
Affiliations
[1] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[2] Tencent Youtu Lab, Shenzhen, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
DOI
10.1109/CVPR52688.2022.01472
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Patch attacks, one of the most threatening forms of physical attack via adversarial examples, can cause networks to misclassify by arbitrarily modifying the pixels in a contiguous region. A certifiable patch defense guarantees that the classifier's prediction cannot be altered by any patch attack within its threat model. Existing certifiable patch defenses sacrifice the clean accuracy of classifiers and obtain only low certified accuracy on toy datasets; both their clean and certified accuracy remain significantly lower than that of normal classification networks, which limits their use in practice. To move towards a practical certifiable patch defense, we introduce the Vision Transformer (ViT) into the framework of Derandomized Smoothing (DS). Specifically, we propose a progressive smoothed image modeling task to train the Vision Transformer, which captures more discriminable local context of an image while preserving global semantic information. For efficient inference and real-world deployment, we reconstruct the global self-attention structure of the original ViT into isolated band unit self-attention. On ImageNet, under 2%-area patch attacks our method achieves 41.70% certified accuracy, roughly 1.6 times the previous best method (26.00%). Simultaneously, our method achieves 78.58% clean accuracy, quite close to that of a normal ResNet-101. Extensive experiments show that our method obtains state-of-the-art clean and certified accuracy on CIFAR-10 and ImageNet while remaining efficient at inference.
Pages: 15127-15137
Page count: 11
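As background for the Derandomized Smoothing (DS) framework the abstract builds on, here is a minimal sketch of column-wise DS inference and certification (in the style of Levine & Feizi, 2020), not the paper's actual implementation. The `model` callable, `band_width`, and `patch_size` parameters are illustrative placeholders.

```python
import torch
import torch.nn as nn


def ds_certify_columns(model: nn.Module, image: torch.Tensor,
                       band_width: int, patch_size: int,
                       num_classes: int):
    """Simplified column-wise Derandomized Smoothing.

    `image` is a (C, H, W) tensor; `model` maps a (1, C, H, W) batch
    to (1, num_classes) logits. Both are placeholders for illustration.
    """
    _, _, width = image.shape
    votes = torch.zeros(num_classes, dtype=torch.long)

    # Classify one ablated copy of the image per band position: keep a
    # vertical band of `band_width` columns (wrapping around the right
    # edge) and zero out everything else.
    for pos in range(width):
        cols = [(pos + i) % width for i in range(band_width)]
        ablated = torch.zeros_like(image)
        ablated[:, :, cols] = image[:, :, cols]
        with torch.no_grad():
            pred = model(ablated.unsqueeze(0)).argmax(dim=1).item()
        votes[pred] += 1

    # An m x m patch can overlap at most (m + band_width - 1) band
    # positions, so an adversary controls at most that many votes.
    # Certify when the top-two margin exceeds twice that bound
    # (class-index tie-breaking is omitted here for brevity).
    max_affected = patch_size + band_width - 1
    top2 = votes.topk(2)
    prediction = top2.indices[0].item()
    certified = (top2.values[0] - top2.values[1]).item() > 2 * max_affected
    return prediction, certified
```

In this framing, the paper's contribution is the base classifier inside the loop: a ViT trained with progressive smoothed image modeling so it classifies narrow bands well, with its global self-attention restructured into isolated band unit self-attention so each ablated forward pass is cheap enough for practical deployment.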