Black-box adversarial sample generation based on differential evolution

Cited by: 30
Authors
Lin, Junyu [1 ,2 ]
Xu, Lei [1 ,2 ]
Liu, Yingqi [3 ]
Zhang, Xiangyu [3 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ, Dept Comp Sci & Technol, Nanjing, Peoples R China
[3] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
Keywords
Adversarial samples; Differential evolution; Black-box testing; Deep Neural Network;
DOI
10.1016/j.jss.2020.110767
CLC Classification
TP31 [Computer Software];
Subject Classification
081202 ; 0835 ;
Abstract
Deep Neural Networks (DNNs) are being used in various daily tasks such as object detection, speech processing, and machine translation. However, DNNs are known to suffer from robustness problems: perturbed inputs, called adversarial samples, lead to misbehavior of DNNs. In this paper, we propose a black-box technique called Black-box Momentum Iterative Fast Gradient Sign Method (BMI-FGSM) to test the robustness of DNN models. The technique requires no knowledge of the structure or weights of the target DNN. In contrast to existing white-box testing techniques, which require access to model internals such as gradients, our technique approximates gradients through Differential Evolution and uses the approximated gradients to construct adversarial samples. Experimental results show that our technique achieves a 100% success rate in generating adversarial samples that trigger misclassification, and over 95% success in generating samples that trigger misclassification to a specific target output label. It also achieves better perturbation distance and better transferability. Compared to the state-of-the-art black-box technique, our technique is more efficient. Furthermore, we test the commercial Aliyun API and successfully trigger its misbehavior within a limited number of queries, demonstrating the feasibility of real-world black-box attacks. (C) 2020 Elsevier Inc. All rights reserved.
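The abstract's core idea, estimating gradient signs of a black-box model with Differential Evolution and feeding them into momentum-iterative FGSM updates, can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's implementation: the toy `black_box_loss` surrogate, the population size, step sizes, and momentum factor are all assumptions chosen for demonstration.

```python
import numpy as np

def black_box_loss(x, target=0.6):
    # Stand-in for querying the target model: a toy scalar "loss".
    # In the paper's setting this would be derived from the DNN's output scores.
    return float(np.sum(np.cos(3 * x)) + np.sum((x - target) ** 2))

def de_sign_estimate(x, loss_fn, pop_size=20, gens=15, scale=0.5, cr=0.9):
    """Estimate the sign of the loss gradient at x via Differential Evolution.

    Each individual encodes a candidate sign pattern in [-1, 1]^d; its fitness
    is the loss after a small probe step in that direction (for an untargeted
    attack, higher loss is better).
    """
    rng = np.random.default_rng(0)
    d, step = x.size, 0.05
    pop = rng.uniform(-1, 1, size=(pop_size, d))
    fitness = np.array([loss_fn(x + step * np.sign(p)) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Classic DE/rand/1 mutation and binomial crossover.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + scale * (b - c), -1, 1)
            trial = np.where(rng.random(d) < cr, mutant, pop[i])
            f = loss_fn(x + step * np.sign(trial))
            if f > fitness[i]:  # greedy selection: keep the trial if it raises loss
                pop[i], fitness[i] = trial, f
    return np.sign(pop[np.argmax(fitness)])

def bmi_fgsm(x0, loss_fn, eps=0.3, iters=10, mu=0.9):
    """Momentum-iterative FGSM-style update driven by DE-estimated signs."""
    x, g = x0.copy(), np.zeros_like(x0)
    alpha = eps / iters
    for _ in range(iters):
        g = mu * g + de_sign_estimate(x, loss_fn)       # accumulate momentum
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)  # stay in eps-ball
    return x

x0 = np.zeros(4)
x_adv = bmi_fgsm(x0, black_box_loss)
# The DE-guided perturbation should raise the surrogate loss.
print(black_box_loss(x0), black_box_loss(x_adv))
```

Note that only loss queries are issued, never gradient calls, which is what makes the approach black-box; the query budget here is pop_size + gens * pop_size evaluations per iteration.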
Pages: 11
Related papers
50 records in total
  • [21] Black-Box Boundary Attack Based on Gradient Optimization
    Yang, Yuli
    Liu, Zishuo
    Lei, Zhen
    Wu, Shuhong
    Chen, Yongle
    ELECTRONICS, 2024, 13 (06)
  • [22] Black-box testing based on colorful taint analysis
    Chen, Kai
    Feng, DengGuo
    Su, PuRui
    Zhang, YingJun
    SCIENCE CHINA-INFORMATION SCIENCES, 2012, 55 (01) : 171 - 183
  • [25] A black-box adversarial attack strategy with adjustable sparsity and generalizability for deep image classifiers
    Ghosh, Arka
    Mullick, Sankha Subhra
    Datta, Shounak
    Das, Swagatam
    Das, Asit Kr
    Mallipeddi, Rammohan
    PATTERN RECOGNITION, 2022, 122
  • [26] A TEST CASE GENERATION METHOD FOR BLACK-BOX TESTING OF CONCURRENT PROGRAMS
    ARAKAWA, N
    SONEOKA, T
    IEICE TRANSACTIONS ON COMMUNICATIONS, 1992, E75B (10) : 1081 - 1089
  • [27] Imperceptible black-box waveform-level adversarial attack towards automatic speaker recognition
    Zhang, Xingyu
    Zhang, Xiongwei
    Sun, Meng
    Zou, Xia
    Chen, Kejiang
    Yu, Nenghai
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (01) : 65 - 79
  • [28] VIWHard: Text adversarial attacks based on important-word discriminator in the hard-label black-box setting
    Zhang, Hua
    Wang, Jiahui
    Gao, Haoran
    Zhang, Xin
    Wang, Huewei
    Li, Wenmin
    NEUROCOMPUTING, 2025, 616
  • [30] GreedyFool: Multi-factor imperceptibility and its application to designing a black-box adversarial attack
    Liu, Hui
    Zhao, Bo
    Ji, Minzhi
    Li, Mengchen
    Liu, Peng
    INFORMATION SCIENCES, 2022, 613 : 717 - 730