EvoAttack: An Evolutionary Search-Based Adversarial Attack for Object Detection Models

Cited by: 6
Authors
Chan, Kenneth [1]
Cheng, Betty H. C. [1]
Affiliations
[1] Michigan State Univ, Dept Comp Sci & Engn, 428 S Shaw Ln, E Lansing, MI 48824 USA
Source
SEARCH-BASED SOFTWARE ENGINEERING, SSBSE 2022 | 2022, Vol. 13711
Keywords
Evolutionary search; Adversarial examples; Machine learning
DOI
10.1007/978-3-031-21251-2_6
CLC Classification
TP31 [Computer Software]
Subject Classification
081202; 0835
Abstract
State-of-the-art deep neural networks for image classification, recognition, and detection tasks are increasingly used in a range of real-world applications, including safety-critical ones where system failure may cause serious harm, injuries, or even deaths. Adversarial examples are inputs that have been maliciously modified so that machine learning models fail to classify them correctly. While a number of evolutionary search-based approaches have been developed to generate adversarial examples against image classification models, evolutionary search-based attacks against object detection algorithms remain unexplored. This paper explores how evolutionary search-based techniques can be used as a black-box, model- and data-agnostic approach to attack state-of-the-art object detection algorithms (e.g., RetinaNet and Faster R-CNN). A proof-of-concept implementation demonstrates how evolutionary search can generate adversarial examples that existing models fail to correctly process. We applied our approach to two benchmark datasets, Microsoft COCO and the Waymo Open Dataset, using minor perturbations to generate adversarial examples that prevent correct detection and classification in areas of interest.
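To make the abstract's description concrete, the following is a minimal sketch of the generic evolutionary black-box attack pattern it outlines: a population of small, bounded pixel perturbations is evolved against a detector that is queried only through its outputs. This is not the authors' EvoAttack implementation; `query_detector`, the epsilon bound, the fitness function, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's EvoAttack implementation) of
# an evolutionary black-box attack: evolve a small, bounded perturbation so
# that a detector, queried only through its outputs, loses a target detection.
import numpy as np

def query_detector(image: np.ndarray) -> float:
    """Hypothetical black-box oracle: confidence of the detection we want to
    suppress. A real attack would run e.g. RetinaNet or Faster R-CNN here and
    read out the confidence of the targeted box/class."""
    return float(np.clip(image.mean(), 0.0, 1.0))  # placeholder only

def evolve_adversarial(image, pop_size=20, generations=100,
                       epsilon=8 / 255, mutation_rate=0.01, rng=None):
    """Minimize detector confidence with a simple genetic algorithm; the
    perturbation stays inside an L-infinity ball of radius epsilon (the
    abstract's "minor perturbations")."""
    if rng is None:
        rng = np.random.default_rng(0)
    shape = image.shape
    # Initial population: random perturbations inside the epsilon ball.
    pop = rng.uniform(-epsilon, epsilon, size=(pop_size, *shape))
    for _ in range(generations):
        candidates = np.clip(image + pop, 0.0, 1.0)
        fitness = np.array([query_detector(c) for c in candidates])
        if fitness.min() < 0.5:  # illustrative "detection suppressed" threshold
            return candidates[fitness.argmin()]
        # Selection: keep the fitter half (lower confidence is better).
        parents = pop[np.argsort(fitness)[: pop_size // 2]]
        # Crossover: uniform pixel-wise mix of two random parents per child.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, *shape)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Mutation: re-sample a small fraction of pixels, then re-bound.
        mutate = rng.random((pop_size, *shape)) < mutation_rate
        children[mutate] = rng.uniform(-epsilon, epsilon, size=int(mutate.sum()))
        pop = np.clip(children, -epsilon, epsilon)
    # Budget exhausted: return the best candidate found in the final population.
    candidates = np.clip(image + pop, 0.0, 1.0)
    fitness = np.array([query_detector(c) for c in candidates])
    return candidates[fitness.argmin()]

# Toy usage on a random "image" in [0, 1]:
img = np.random.default_rng(1).random((32, 32, 3))
adv = evolve_adversarial(img)
print("confidence before/after:", query_detector(img), query_detector(adv))
```

The fitness here is simply the raw confidence returned by the black-box detector; the paper's actual fitness function, variation operators, and stopping criteria may differ.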
Pages: 83-97
Page count: 15