Black-box adversarial sample generation based on differential evolution

Times Cited: 31
Authors
Lin, Junyu [1 ,2 ]
Xu, Lei [1 ,2 ]
Liu, Yingqi [3 ]
Zhang, Xiangyu [3 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ, Dept Comp Sci & Technol, Nanjing, Peoples R China
[3] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
Keywords
Adversarial samples; Differential evolution; Black-box testing; Deep Neural Networks
DOI
10.1016/j.jss.2020.110767
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline Code
081202; 0835
Abstract
Deep Neural Networks (DNNs) are used in everyday tasks such as object detection, speech processing, and machine translation. However, DNNs are known to suffer from robustness problems: perturbed inputs, called adversarial samples, can cause DNNs to misbehave. In this paper, we propose a black-box technique called Black-box Momentum Iterative Fast Gradient Sign Method (BMI-FGSM) for testing the robustness of DNN models. The technique requires no knowledge of the structure or weights of the target DNN. In contrast to existing white-box testing techniques, which require access to model internals such as gradients, our technique approximates gradients through Differential Evolution and uses the approximated gradients to construct adversarial samples. Experimental results show that our technique achieves a 100% success rate in generating adversarial samples that trigger misclassification, and an over 95% success rate in generating samples that trigger misclassification to a specific target label. It also produces adversarial samples with better perturbation distance and better transferability. Compared to the state-of-the-art black-box technique, our technique is more efficient. Furthermore, we test the commercial Aliyun API and successfully trigger its misbehavior within a limited number of queries, demonstrating the feasibility of real-world black-box attacks. (C) 2020 Elsevier Inc. All rights reserved.
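To illustrate the query-only setting the abstract describes, the sketch below (not the authors' implementation; the full BMI-FGSM algorithm is in the paper) uses a plain differential-evolution search to find an untargeted adversarial perturbation for a black-box classifier. The model `black_box_predict`, the toy two-class task, and all parameter values are hypothetical placeholders.

import numpy as np

def black_box_predict(x):
    """Hypothetical black box: returns class probabilities for input x."""
    # Toy 2-class model: class 0 if the first half of x is "brighter"
    # than the second half, class 1 otherwise.
    logits = np.array([x[:4].sum() - x[4:].sum(), x[4:].sum() - x[:4].sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fitness(delta, x, true_label):
    """Lower is better: confidence the model keeps in the true label."""
    x_adv = np.clip(x + delta, 0.0, 1.0)
    return black_box_predict(x_adv)[true_label]

def de_attack(x, true_label, eps=0.1, pop_size=20, gens=100, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = x.size
    pop = rng.uniform(-eps, eps, size=(pop_size, dim))     # candidate perturbations
    scores = np.array([fitness(p, x, true_label) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), -eps, eps)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial, x, true_label)
            if s < scores[i]:                              # greedy selection
                pop[i], scores[i] = trial, s
        x_adv = np.clip(x + pop[scores.argmin()], 0.0, 1.0)
        if black_box_predict(x_adv).argmax() != true_label:
            return x_adv                                   # misclassification triggered
    return np.clip(x + pop[scores.argmin()], 0.0, 1.0)    # best effort

x = np.full(8, 0.5)                                        # toy "image" in [0, 1]
label = int(black_box_predict(x).argmax())
x_adv = de_attack(x, label)
print("original:", label, "adversarial:", int(black_box_predict(x_adv).argmax()))

Note that this sketch evolves the perturbation directly, whereas BMI-FGSM, per the abstract, uses Differential Evolution to approximate gradient information that then drives momentum-iterative FGSM updates.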
Pages: 11