Boosting the Adversarial Transferability of Surrogate Models with Dark Knowledge

Cited: 1
Authors
Yang, Dingcheng [1 ,2 ]
Xiao, Zihao [2 ]
Yu, Wenjian [1 ]
Affiliations
[1] Tsinghua University, Department of Computer Science and Technology, BNRist, Beijing, People's Republic of China
[2] RealAI, Beijing, Peoples R China
Source
2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI), 2023
Keywords
Deep learning; Image classification; Black-box adversarial attack; Transfer-based attack; Dark knowledge;
DOI
10.1109/ICTAI59109.2023.00098
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples. Moreover, adversarial examples are transferable: an adversarial example crafted for one DNN model can fool another model with non-trivial probability. This property gave birth to the transfer-based attack, in which adversarial examples generated on a surrogate model are used to conduct black-box attacks. There is considerable work on generating adversarial examples with better transferability from a given surrogate model. However, training a special surrogate model that yields more transferable adversarial examples is relatively under-explored. This paper proposes a method for training a surrogate model with dark knowledge to boost the transferability of the adversarial examples it generates. The trained surrogate model is named the dark surrogate model (DSM). The proposed training method consists of two key components: a teacher model that extracts dark knowledge, and a mixing augmentation technique that enhances the dark knowledge of the training data. Extensive experiments show that the proposed method substantially improves the adversarial transferability of surrogate models across different surrogate-model architectures and optimizers for generating adversarial examples, and that it can be applied to other transfer-based attack scenarios that contain dark knowledge, such as face verification. Our code is publicly available at https://github.com/ydc123/Dark_Surrogate_Model.
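The two components named in the abstract, training on a teacher's soft labels ("dark knowledge") and a mixing augmentation, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names and the choice of mixup-style mixing are illustrative assumptions, and a real DSM would be trained with a deep-learning framework on the teacher's logits.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Numerically stable softmax over the last axis."""
    z = z / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dark_knowledge_loss(student_logits, teacher_logits, temperature=1.0):
    """Cross-entropy of the student against the teacher's soft labels.

    The teacher's full probability distribution (not just the argmax class)
    is the "dark knowledge" the surrogate model is trained to match.
    """
    p_teacher = softmax(teacher_logits, temperature)
    log_q_student = np.log(softmax(student_logits, temperature))
    return -(p_teacher * log_q_student).sum(axis=-1).mean()

def mixup(x1, y1, x2, y2, lam=0.5):
    """Mixing augmentation (mixup-style, assumed here for illustration):
    a convex combination of two inputs and of their soft labels."""
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

For example, mixing two one-hot labels with `lam=0.3` yields the soft label `[0.3, 0.7]`, so even hard-labeled data acquires dark knowledge after augmentation; the loss above then trains the surrogate to reproduce such soft distributions.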
Pages: 627-635 (9 pages)