GradFuzz: Fuzzing deep neural networks with gradient vector coverage for adversarial examples

Times Cited: 5
Authors
Park, Leo Hyun [1 ]
Chung, Soochang [1 ]
Kim, Jaeuk [1 ]
Kwon, Taekyoung [1 ]
Affiliations
[1] Yonsei Univ, Grad Sch Informat, Informat Secur Lab, Seoul 03722, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Deep learning security; Coverage-guided DNN fuzzing; Gradient vector coverage;
DOI
10.1016/j.neucom.2022.12.019
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep neural networks (DNNs) are susceptible to adversarial attacks that add perturbations to the input data, leading to misclassification errors and causing machine-learning systems to fail. For defense, adversarial training leverages possible crashing inputs, i.e., adversarial examples; however, the input space of DNNs is enormous and high-dimensional, making it difficult to find such inputs across a wide range. Coverage-guided fuzzing is promising in this respect, but it leaves open the question of which coverage metrics are appropriate for DNNs. We observed that the abilities of existing coverage metrics are limited: they lack gradual guidance toward crashes because they merely search for a wide neuron activation area. None of the existing approaches can simultaneously achieve high crash quantity, high crash diversity, and efficient fuzzing time. Moreover, the evaluation methodologies adopted by state-of-the-art fuzzers need rigorous improvement. To address these problems, we present a new DNN fuzzer named GradFuzz. Our key idea is gradient vector coverage, which provides gradual guidance toward misclassified categories. We implemented our system and performed experiments under rigorous evaluation methodologies. Our evaluation results indicate that GradFuzz outperforms state-of-the-art DNN fuzzers: GradFuzz can locate a more diverse set of errors, beneficial to adversarial training, on the MNIST and CIFAR-10 datasets without sacrificing either crash quantity or fuzzing efficiency. (c) 2022 Elsevier B.V. All rights reserved.
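To make the idea in the abstract concrete, the sketch below shows what a gradient-vector-coverage-guided fuzzing loop could look like in Python with TensorFlow/Keras. It is a minimal, hypothetical illustration, not the authors' GradFuzz implementation: the choice of penultimate-layer loss gradients as the "gradient vector", the binning scheme used to detect new coverage, the uniform-noise mutation operator, and all function names are assumptions introduced here for clarity.

```python
# Hypothetical sketch of gradient-vector-coverage-guided fuzzing.
# NOT the authors' GradFuzz implementation; layer choice, binning,
# and mutation are illustrative assumptions.
import numpy as np
import tensorflow as tf

def gradient_vector(model, x, label):
    """Loss gradient w.r.t. the penultimate-layer activations for one input.
    Assumes a functional tf.keras classifier whose last layer is the softmax head."""
    feature_extractor = tf.keras.Model(model.inputs, model.layers[-2].output)
    head = model.layers[-1]
    with tf.GradientTape() as tape:
        feats = feature_extractor(x[None, ...])
        tape.watch(feats)                       # track the intermediate activations
        probs = head(feats)
        loss = tf.keras.losses.sparse_categorical_crossentropy([label], probs)
    return tape.gradient(loss, feats).numpy().ravel()

def fuzz(model, seeds, labels, iterations=1000, eps=0.05, bins=10):
    """Mutate seeds, keep mutants that exercise new (discretized) gradient
    vectors, and collect misclassified mutants as crashes."""
    rng = np.random.default_rng(0)
    queue = list(zip(seeds, labels))            # seed corpus, grown during fuzzing
    coverage = set()                            # bucketized gradient vectors seen so far
    crashes = []                                # (mutant, true label, predicted label)
    for _ in range(iterations):
        x, y = queue[rng.integers(len(queue))]
        mutant = np.clip(x + rng.uniform(-eps, eps, x.shape), 0.0, 1.0).astype(np.float32)
        pred = int(np.argmax(model.predict(mutant[None, ...], verbose=0)))
        if pred != y:                           # misclassification == crash
            crashes.append((mutant, y, pred))
            continue
        g = gradient_vector(model, mutant, y)
        key = tuple(np.digitize(g, np.linspace(g.min(), g.max(), bins)))
        if key not in coverage:                 # new coverage: keep mutant as a seed
            coverage.add(key)
            queue.append((mutant, y))
    return crashes, coverage
```

A production fuzzer would refine this considerably (per-class coverage bookkeeping, bounded perturbation budgets, smarter seed scheduling), but the loop illustrates the guidance principle the abstract describes: a gradient-based signal can steer mutations gradually toward decision boundaries rather than only rewarding a wider neuron activation area.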
Pages: 165-180
Number of pages: 16
Related Papers
50 in total
  • [21] Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples
    Sun, Guangling
    Su, Yuying
    Qin, Chuan
    Xu, Wenbo
    Lu, Xiaofeng
    Ceglowski, Andrzej
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [22] Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction
    Liang, Bin
    Li, Hongcheng
    Su, Miaoqiang
    Li, Xirong
    Shi, Wenchang
    Wang, Xiaofeng
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (01) : 72 - 85
  • [23] Deep neural rejection against adversarial examples
    Sotgiu, Angelo
    Demontis, Ambra
    Melis, Marco
    Biggio, Battista
    Fumera, Giorgio
    Feng, Xiaoyi
    Roli, Fabio
    EURASIP JOURNAL ON INFORMATION SECURITY, 2020
  • [24] Deep neural rejection against adversarial examples
    Sotgiu, Angelo
    Demontis, Ambra
    Melis, Marco
    Biggio, Battista
    Fumera, Giorgio
    Feng, Xiaoyi
    Roli, Fabio
    EURASIP JOURNAL ON INFORMATION SECURITY, 2020, 2020 (01)
  • [25] TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing
    Odena, Augustus
    Olsson, Catherine
    Andersen, David G.
    Goodfellow, Ian
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [26] WHEN CAUSAL INTERVENTION MEETS ADVERSARIAL EXAMPLES AND IMAGE MASKING FOR DEEP NEURAL NETWORKS
    Yang, Chao-Han Huck
    Liu, Yi-Chieh
    Chen, Pin-Yu
    Ma, Xiaoli
    Tsai, Yi-Chang James
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3811 - 3815
  • [27] EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
    Chen, Pin-Yu
    Sharma, Yash
    Zhang, Huan
    Yi, Jinfeng
    Hsieh, Cho-Jui
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 10 - 17
  • [28] A single gradient step finds adversarial examples on random two-layers neural networks
    Bubeck, Sebastien
    Cherapanamjeri, Yeshwanth
    Gidel, Gauthier
    des Combes, Remi Tachet
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [29] Deep Networks with RBF Layers to Prevent Adversarial Examples
    Vidnerova, Petra
    Neruda, Roman
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2018, PT I, 2018, 10841 : 257 - 266
  • [30] Audio Adversarial Examples Generation with Recurrent Neural Networks
    Chang, Kuei-Huan
    Huang, Po-Hao
    Yu, Honggang
    Jin, Yier
    Wang, Ting-Chi
    2020 25TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2020, 2020, : 488 - 493