Adversarial Examples Generation Method Based on Texture and Perceptual Color Distance

Cited by: 0
Authors
Xu M. [1 ]
Jiang B.-C. [1 ]
Affiliations
[1] The School of Cyberspace, Hangzhou Dianzi University, Hangzhou
Source
Dianzi Keji Daxue Xuebao / Journal of the University of Electronic Science and Technology of China | 2021, Vol. 50, No. 4
Keywords
Adversarial examples; Automatic hyperparameter optimization; Imperceptible; Perceptual color distance
DOI
10.12178/1001-0548.2021058
Abstract
Ideal adversarial examples should not only deceive the machine learning classifier but also remain imperceptible to human vision. Traditional algorithms adopt only an Lp norm as the measure of perturbation size, which often yields perturbations that fall within the visible range. In this paper, a method for generating adversarial examples based on texture and perceptual color distance is developed. The main idea is to embed the perturbation into high-texture regions of an image and to optimize the perceptual color distance, so as to reduce the visible difference between the original image and the adversarial example. Moreover, an automatic hyperparameter optimization method is employed to accelerate the convergence of backpropagation. Experimental evaluation shows that the proposed algorithm attains a smaller L2 norm and perceptual color distance than other algorithms, while requiring fewer iterations to generate adversarial examples. Copyright ©2020 Journal of University of Electronic Science and Technology of China. All rights reserved.
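The two ingredients the abstract describes can be illustrated in a short sketch. The following Python snippet is not the authors' implementation: it assumes a local-variance texture measure for selecting high-texture pixels and uses CIEDE2000 (via scikit-image) as the perceptual color distance; the function names, window size, and quantile threshold are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of two ideas from the abstract:
# (1) confine the perturbation to high-texture regions via a texture mask,
# (2) measure imperceptibility with the CIEDE2000 perceptual color distance.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2lab, deltaE_ciede2000

def texture_mask(img, window=8, quantile=0.7):
    """Binary mask of high-texture pixels, measured by the local variance
    of the grayscale image (an assumed stand-in for the paper's texture
    measure)."""
    gray = img.mean(axis=2)
    local_mean = uniform_filter(gray, size=window)
    local_sq_mean = uniform_filter(gray ** 2, size=window)
    variance = local_sq_mean - local_mean ** 2
    return (variance >= np.quantile(variance, quantile)).astype(np.float64)

def perceptual_color_distance(img, adv):
    """Mean CIEDE2000 distance between two RGB images with values in [0, 1]."""
    return deltaE_ciede2000(rgb2lab(img), rgb2lab(adv)).mean()

def embed_perturbation(img, delta, window=8, quantile=0.7):
    """Apply a raw perturbation only inside the high-texture mask."""
    mask = texture_mask(img, window, quantile)[..., None]
    return np.clip(img + mask * delta, 0.0, 1.0)

# Toy usage: a random perturbation confined to textured regions.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
delta = 0.03 * rng.standard_normal(img.shape)
adv = embed_perturbation(img, delta)
print("perceptual color distance:", perceptual_color_distance(img, adv))
```

In the paper's actual attack this perceptual color distance would serve as an optimization objective alongside the classifier loss; here it is only evaluated once to show how the quantity is computed.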
Pages: 558-564
Number of pages: 6