White-box content camouflage attacks against deep learning

Cited by: 0
Authors
Chen, Tianrong [1]
Ling, Jie [1]
Sun, Yuping [1]
Affiliations
[1] School of Computer Science, Guangdong University of Technology, Guangzhou, China
Funding
National Natural Science Foundation of China
Keywords
Content camouflage - Deep learning - Learning models - Pre-processing - Pre-processing algorithms - Processing modules - Recent research - Research fields - White box - White-box attack
DOI
Not available
Abstract
Deep learning has achieved remarkable success in a wide range of computer vision tasks. However, recent research suggests that deep learning systems are vulnerable to a variety of attacks. Security concerns about the training and inference phases of deep learning models have been raised in recent years, but research on the vulnerability of the pre-processing components of these models is still developing. In this paper, we systematically examine white-box content camouflage attacks on five types of pre-processing modules in deep learning systems: scaling, sharpening, gamma correction, contrast adjustment, and saturation adjustment. We assume that the attacker's goal is to generate camouflage examples that show inconsistent visual semantics before and after pre-processing. Under the white-box setting (where the pre-processing algorithms and their parameters are known), we formulate content camouflage attacks as an optimization problem in which the perceptual losses with respect to the source and target images are smoothly computed by a multi-scale discriminator to improve the camouflaging effect of the attack example. We evaluate our content camouflage attacks through a series of experiments on two example groups as well as two real-world datasets, CIFAR-10 and FER-2013. The experimental results show that our attacks camouflage well, are effective against deep learning systems, and outperform prevalent scaling camouflage attacks by generating examples with better quality and a higher attack success rate. The proposed camouflage attacks also extend to the four other commonly used pre-processing algorithms and yield good results. Furthermore, we discuss the effect of varying the parameters of several image pre-processing algorithms under our attacks and analyze the reasons for their vulnerability. © 2022
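The abstract compares against "scaling camouflage attacks", which exploit the fact that a downscaler samples only a small subset of input pixels. The sketch below is not the paper's optimization-based method; it is a minimal, assumption-laden illustration of the underlying vulnerability for nearest-neighbor downscaling, where modifying just the sampled pixels makes an image that resembles the source at full resolution but decodes to a hidden target after pre-processing. All function names here are hypothetical.

```python
import numpy as np

def nn_downscale(img, scale):
    """Nearest-neighbor downscaling: keep every `scale`-th pixel."""
    return img[::scale, ::scale]

def scaling_camouflage(source, target, scale):
    """Craft an image that matches `source` almost everywhere but
    becomes `target` after nearest-neighbor downscaling by `scale`.
    Only the pixels the downscaler actually samples are overwritten."""
    attack = source.copy()
    h, w = target.shape
    for i in range(h):
        for j in range(w):
            attack[i * scale, j * scale] = target[i, j]
    return attack

# Toy grayscale example: hide a 2x2 "target" inside an 8x8 "source".
rng = np.random.default_rng(0)
source = rng.integers(0, 256, (8, 8), dtype=np.uint8)
target = rng.integers(0, 256, (2, 2), dtype=np.uint8)
adv = scaling_camouflage(source, target, scale=4)

# The downscaled attack image equals the hidden target exactly,
# yet at most 4 of the 64 full-resolution pixels were changed.
assert np.array_equal(nn_downscale(adv, 4), target)
assert int(np.sum(adv != source)) <= 4
```

At realistic image sizes the fraction of modified pixels is tiny, which is why such examples look benign before pre-processing; the paper's attacks generalize this inconsistency to sharpening, gamma correction, contrast, and saturation via an optimization with a multi-scale perceptual loss.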