Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

Cited by: 44
Authors
Melis, Marco [1 ]
Demontis, Ambra [1 ]
Biggio, Battista [1 ,2 ]
Brown, Gavin [3 ]
Fumera, Giorgio [1 ]
Roli, Fabio [1 ,2 ]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, Cagliari, Italy
[2] Pluribus One, Cagliari, Italy
[3] Univ Manchester, Sch Comp Sci, Manchester, Lancs, England
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017) | 2017
Keywords
SECURITY;
DOI
10.1109/ICCVW.2017.94
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have been widely adopted in recent years, exhibiting impressive performance across several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally efficient countermeasure to mitigate this threat, based on rejecting classification of anomalous inputs. We then provide a clearer understanding of the safety properties of deep networks through an intuitive empirical analysis, showing that the mapping learned by such networks essentially violates the smoothness assumption of learning algorithms. We finally discuss the main limitations of this work, including the creation of real-world adversarial examples, and sketch promising research directions.
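The abstract's core notion of an adversarial example can be illustrated with a minimal toy sketch. This is a hypothetical gradient-sign-style perturbation of a hand-picked linear classifier, not the paper's actual attack on the iCub's deep-network pipeline; all weights, inputs, and the budget `eps` are made-up values for illustration:

```python
# Toy adversarial example (hypothetical linear model, not the paper's method):
# a gradient-sign step within a small L-infinity budget flips the classifier's
# decision while changing the input only slightly.

w = [1.0, -2.0, 0.5]   # weights of a toy linear classifier
b = 0.1                # bias term

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) >= 0 else -1

def sign(v):
    return 1.0 if v >= 0 else -1.0

x = [0.5, 0.1, 0.2]    # clean input, classified as +1
eps = 0.3              # L-infinity perturbation budget

# The gradient of the score w.r.t. x is just w; step against the current class.
x_adv = [xi - eps * sign(wi) * predict(x) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # prints: 1 -1  (the label flips)
print(round(max(abs(a - c) for a, c in zip(x_adv, x)), 6))  # prints: 0.3
```

The same idea scales to deep networks, where the gradient is obtained by backpropagation; the paper's proposed defense rejects such anomalous inputs rather than classifying them.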
Pages: 751-759 (9 pages)