30 entries in total
- [1] SZEGEDY C, ZAREMBA W, SUTSKEVER I, Et al., Intriguing properties of neural networks, The 2nd International Conference on Learning Representations, 4, pp. 3861-3864, (2014)
- [2] CARLINI N, WAGNER D., Towards evaluating the robustness of neural networks, Symposium on Security and Privacy, 5, pp. 39-57, (2017)
- [3] GOODFELLOW I, SHLENS J, SZEGEDY C., Explaining and harnessing adversarial examples, The 3rd International Conference on Learning Representations, 5, pp. 1353-1362, (2015)
- [4] KURAKIN A, GOODFELLOW I, BENGIO S., Adversarial examples in the physical world, The 5th International Conference on Learning Representations, 4, pp. 1238-1249, (2017)
- [5] TRAMER F, KURAKIN A, PAPERNOT N, Et al., Ensemble adversarial training: attacks and defenses, The 6th International Conference on Learning Representations, 5, pp. 131-138, (2018)
- [6] MOOSAVI-DEZFOOLI S, FAWZI A, FROSSARD P., DeepFool: a simple and accurate method to fool deep neural networks, Conference on Computer Vision and Pattern Recognition, 6, pp. 2574-2582, (2016)
- [7] PAPERNOT N, MCDANIEL P, JHA S, Et al., The limitations of deep learning in adversarial settings, European Symposium on Security and Privacy, 3, pp. 372-387, (2016)
- [8] SHARIF M, BAUER L, REITER M., On the suitability of Lp-norms for creating and preventing adversarial examples, 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 6, pp. 1605-1613, (2018)
- [9] RONY J, HAFEMANN L, OLIVEIRA L, Et al., Decoupling direction and norm for efficient gradient-based L2 adversarial attacks and defenses, Conference on Computer Vision and Pattern Recognition, 6, pp. 4322-4330, (2019)
- [10] GATYS L, ECKER A, BETHGE M., A neural algorithm of artistic style