Adversarial Examples on Object Recognition: A Comprehensive Survey

Cited by: 93
Authors
Serban, Alex [1]
Poll, Erik [1]
Visser, Joost [2]
Affiliations
[1] Radboud Univ Nijmegen, Toernooiveld 212, NL-6525 EC Nijmegen, Netherlands
[2] Leiden Univ, Niels Bohrweg 1, NL-2333 CA Leiden, Netherlands
Keywords
Adversarial examples; machine learning; security; robustness; classifiers
DOI
10.1145/3398394
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Deep neural networks are at the forefront of machine learning research. However, despite achieving impressive performance on complex tasks, they can be very sensitive: Small perturbations of inputs can be sufficient to induce incorrect behavior. Such perturbations, called adversarial examples, are intentionally designed to test the network's sensitivity to distribution drifts. Given their surprisingly small size, a wide body of literature conjectures on their existence and how this phenomenon can be mitigated. In this article, we discuss the impact of adversarial examples on security, safety, and robustness of neural networks. We start by introducing the hypotheses behind their existence, the methods used to construct or protect against them, and the capacity to transfer adversarial examples between different machine learning models. Altogether, the goal is to provide a comprehensive and self-contained survey of this growing field of research.
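The abstract's central observation, that a small, intentionally crafted input perturbation can flip a model's prediction, can be illustrated with the fast gradient sign method, one of the classic construction methods this survey covers. The sketch below is illustrative only and not taken from the paper: it applies FGSM to a toy logistic classifier whose weights and input are made-up values, perturbing the input by at most `eps` per coordinate in the direction that increases the loss.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method for a logistic classifier w @ x + b.

    y is the true label in {-1, +1}; the returned point moves each
    coordinate of x by eps in the direction that increases the
    logistic loss log(1 + exp(-y * (w @ x + b))).
    """
    margin = y * (w @ x + b)
    # Gradient of the logistic loss w.r.t. x; FGSM keeps only its sign.
    grad = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad)

# Made-up toy model and input (illustrative assumptions).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, -0.1, 0.2])        # w @ x + b = 0.6, predicted +1
x_adv = fgsm_perturb(x, w, b, y=+1, eps=0.4)
print(np.sign(w @ x + b), np.sign(w @ x_adv + b))  # prediction flips: 1.0 -1.0
```

Each coordinate moves by only 0.4, yet the predicted sign changes, which is the "surprisingly small size" phenomenon the abstract refers to; for deep networks the gradient is obtained by backpropagation rather than in closed form.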
Pages: 38
Cited references (186 records total; first 10 shown)
[1] Abbasi, Mahdieh, 2017, arXiv:1702.06856
[2] Akhtar, Naveed; Mian, Ajmal. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access, 2018, 6:14410-14430
[3] [Anonymous], 2018, arXiv:1810.11793
[4] [Anonymous], 2018, arXiv:1805.11090
[5] [Anonymous], 2015, arXiv:1511.05122
[6] [Anonymous], 2018, Proc. ICLR
[7] [Anonymous], 2016, Proc. EuroS&P
[8] [Anonymous], 2017, arXiv:1704.03453
[9] [Anonymous], 2014, Convex Optimization
[10] [Anonymous], 2017, arXiv