QUERY-FREE EMBEDDING ATTACK AGAINST DEEP LEARNING

Cited by: 3
Authors
Liu, Yujia [1 ]
Zhang, Weiming [1 ]
Yu, Nenghai [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME) | 2019
Keywords
Adversarial examples; image resizing; embedding attack;
DOI
10.1109/ICME.2019.00073
CLC number
TP31 [Computer software];
Discipline code
081202 ; 0835 ;
Abstract
Deep neural networks are vulnerable to adversarial examples: subtly perturbed images that can fool networks into outputting incorrect classification results. To deceive deep learning models, in this paper, instead of exploiting weaknesses of the networks themselves, we present Embedding Attack, which targets the common image-resizing operation in the deep learning preprocessing pipeline. With this attack, adversaries can embed a small target image into a benign image to produce adversarial examples without querying the target network. When the adversarial example is resized to the required shape, the embedded target image is recovered. We design embedding attacks for three common image-resizing methods and prove that our algorithms are optimal when the target image can be fully recovered. Furthermore, we design a universal embedding attack that enables adversarial examples to work under different resizing methods.
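The core idea can be illustrated for nearest-neighbor resizing. A minimal sketch, assuming a nearest-neighbor resize that samples source index floor(i * src/dst) (one common convention; the paper's actual algorithms for the three resizing methods may differ): the attacker overwrites only the pixels the downscaler will sample, so the resized adversarial image reproduces the target exactly while most of the benign image is untouched.

```python
import numpy as np

def nn_sample_indices(src_len, dst_len):
    # Source indices a nearest-neighbor resize reads when shrinking
    # src_len pixels down to dst_len (convention: floor(i * src/dst)).
    return (np.arange(dst_len) * src_len // dst_len).astype(int)

def nn_resize(img, dst_h, dst_w):
    # Reference nearest-neighbor downscale matching the convention above.
    rows = nn_sample_indices(img.shape[0], dst_h)
    cols = nn_sample_indices(img.shape[1], dst_w)
    return img[np.ix_(rows, cols)]

def embed_attack(benign, target):
    # Embed the target by overwriting exactly the sampled pixels,
    # so nn_resize(adv, *target.shape) recovers the target verbatim.
    adv = benign.copy()
    rows = nn_sample_indices(benign.shape[0], target.shape[0])
    cols = nn_sample_indices(benign.shape[1], target.shape[1])
    adv[np.ix_(rows, cols)] = target
    return adv

rng = np.random.default_rng(0)
benign = rng.integers(0, 256, (224, 224), dtype=np.uint8)  # hypothetical benign image
target = rng.integers(0, 256, (32, 32), dtype=np.uint8)    # hypothetical target image
adv = embed_attack(benign, target)
```

Only 32*32 of the 224*224 pixels (about 2%) are modified, which is why the adversarial example remains visually close to the benign image, yet the target is recovered exactly after resizing.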
Pages: 380-386 (7 pages)