Black-box adversarial sample generation based on differential evolution

Cited by: 31
Authors
Lin, Junyu [1 ,2 ]
Xu, Lei [1 ,2 ]
Liu, Yingqi [3 ]
Zhang, Xiangyu [3 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ, Dept Comp Sci & Technol, Nanjing, Peoples R China
[3] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
Keywords
Adversarial samples; Differential evolution; Black-box testing; Deep Neural Network;
DOI
10.1016/j.jss.2020.110767
Chinese Library Classification (CLC)
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Deep Neural Networks (DNNs) are used in various daily tasks such as object detection, speech processing, and machine translation. However, DNNs are known to suffer from robustness problems: perturbed inputs, called adversarial samples, can lead DNNs to misbehave. In this paper, we propose a black-box technique called Black-box Momentum Iterative Fast Gradient Sign Method (BMI-FGSM) to test the robustness of DNN models. The technique requires no knowledge of the structure or weights of the target DNN. Unlike existing white-box testing techniques, which require access to model internals such as gradients, our technique approximates gradients through Differential Evolution and uses the approximated gradients to construct adversarial samples. Experimental results show that our technique achieves a 100% success rate in generating adversarial samples that trigger misclassification, and over 95% success in generating samples that trigger misclassification to a specific target output label. It also demonstrates better perturbation distance and better transferability. Compared to the state-of-the-art black-box technique, our technique is more efficient. Furthermore, we test the commercial Aliyun API and successfully trigger its misbehavior within a limited number of queries, demonstrating the feasibility of real-world black-box attacks. (C) 2020 Elsevier Inc. All rights reserved.
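The abstract's core idea, searching for an adversarial perturbation with Differential Evolution rather than true gradients, can be illustrated with a minimal toy sketch. This is not the paper's BMI-FGSM algorithm (which additionally approximates gradient signs and applies momentum-iterative updates); it only shows the gradient-free DE/rand/1 loop that queries a black-box loss. All names (`de_attack`, `loss`, the hyperparameters) are illustrative assumptions.

```python
import numpy as np

def de_attack(loss, x, eps=0.1, pop_size=20, gens=50, F=0.5, CR=0.9, seed=0):
    """Toy differential-evolution search for a perturbation delta
    (each component bounded by eps) that minimizes a black-box loss,
    queried without any gradient information."""
    rng = np.random.default_rng(seed)
    d = x.size
    # Population of candidate perturbations, each within the eps-ball.
    pop = rng.uniform(-eps, eps, size=(pop_size, d))
    fit = np.array([loss(x + p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation: combine three distinct other members.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), -eps, eps)
            # Binomial crossover with at least one gene from the mutant.
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it lowers the loss.
            f_trial = loss(x + trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = pop[np.argmin(fit)]
    return x + best, float(fit.min())
```

In a real attack the loss would be, e.g., the target model's confidence in the correct label, so minimizing it drives the input toward misclassification while the perturbation stays bounded.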
Pages: 11