[31] Papernot N, et al. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks[C]. 2016 IEEE Symposium on Security and Privacy (SP), 2016: 582-597.
[32] Papernot N, et al. The Limitations of Deep Learning in Adversarial Settings[C]. 1st IEEE European Symposium on Security and Privacy (EuroS&P), 2016: 372-387.
[33] Pei K, et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems[C]. Proceedings of the Twenty-Sixth ACM Symposium on Operating Systems Principles (SOSP '17), 2017: 1-18.
[34] Rony J, et al. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 4317-4325.
[35] Serebryany K. libFuzzer: a library for coverage-guided fuzz testing (within LLVM).
[36] Shafahi A, et al. Advances in Neural Information Processing Systems, 2019, 32.
[37] Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556, 2015.
[38] Sun Y C, et al. arXiv preprint arXiv:1803.04792, 2019.
[39] Szegedy C, et al. Intriguing Properties of Neural Networks[C]. Proceedings of the International Conference on Learning Representations (ICLR), 2014.
[40] Szegedy C, et al. Going Deeper with Convolutions[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015: 1-9. DOI: 10.1109/CVPR.2015.7298594.