APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection

Cited by: 24
Authors
Braunegg, A. [1]
Chakraborty, Amartya [1]
Krumdick, Michael [1]
Lape, Nicole [1]
Leary, Sara [1]
Manville, Keith [1]
Merkhofer, Elizabeth [1]
Strickhart, Laura [1]
Walmer, Matthew [1]
Affiliation
[1] MITRE Corp, McLean, VA 22101, USA
Source
COMPUTER VISION - ECCV 2020, PT XXI | 2020 / Vol. 12366
Keywords
Adversarial attacks; Adversarial defense; Datasets and evaluation; Object detection
DOI
10.1007/978-3-030-58589-1_3
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Physical adversarial attacks threaten to fool object detection systems, but reproducible research on the real-world effectiveness of physical patches and how to defend against them requires a publicly available benchmark dataset. We present APRICOT, a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle. Our analysis suggests that maintaining adversarial robustness in uncontrolled settings is highly challenging but that it is still possible to produce targeted detections under white-box and sometimes black-box settings. We establish baselines for defending against adversarial patches via several methods, including using a detector supervised with synthetic data and using unsupervised methods such as kernel density estimation, Bayesian uncertainty, and reconstruction error. Our results suggest that adversarial patches can be effectively flagged, both in a high-knowledge, attack-specific scenario and in an unsupervised setting where patches are detected as anomalies in natural images. This dataset and the described experiments provide a benchmark for future research on the effectiveness of and defenses against physical adversarial objects in the wild. The APRICOT project page and dataset are available at apricot.mitre.org.
Pages: 35-50
Number of pages: 16