30 entries in total
- [11] Kurakin A., 2017, International Conference on Learning Representations (ICLR)
- [12] Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser [J]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 1778-1787
- [13] Who's Afraid of Adversarial Queries? The Impact of Image Modifications on Content-based Image Retrieval [J]. ICMR'19: Proceedings of the 2019 ACM International Conference on Multimedia Retrieval, 2019: 306-314
- [14] Madry A., 2019, Towards Deep Learning Models Resistant to Adversarial Attacks, p. 1
- [15] Simple Black-Box Adversarial Attacks on Deep Neural Networks [J]. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017: 1310-1318
- [16] Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks [J]. 2016 IEEE Symposium on Security and Privacy (SP), 2016: 582-597
- [17] The Limitations of Deep Learning in Adversarial Settings [J]. 1st IEEE European Symposium on Security and Privacy, 2016: 372-387
- [18] Performance Measures and a Data Set for Multi-target, Multi-camera Tracking [J]. Computer Vision - ECCV 2016 Workshops, Pt II, 2016, 9914: 17-35
- [19] Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses [J]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 4317-4325
- [20] Schroff F., 2015, Proceedings of CVPR IEEE, p. 815, DOI 10.1109/CVPR.2015.7298682