GradFuzz: Fuzzing deep neural networks with gradient vector coverage for adversarial examples

Cited by: 5
Authors:
Park, Leo Hyun [1]
Chung, Soochang [1]
Kim, Jaeuk [1]
Kwon, Taekyoung [1]
Affiliation:
[1] Yonsei Univ, Grad Sch Informat, Informat Secur Lab, Seoul 03722, South Korea
Funding:
National Research Foundation of Singapore
Keywords:
Deep learning security; Coverage-guided DNN fuzzing; Gradient vector coverage
DOI:
10.1016/j.neucom.2022.12.019
CLC classification:
TP18 [Artificial intelligence theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
Deep neural networks (DNNs) are susceptible to adversarial attacks that add perturbations to the input data, leading to misclassification errors and causing machine-learning systems to fail. For defense, adversarial training leverages possible crashing inputs, i.e., adversarial examples; but the input space of DNNs is enormous and high-dimensional, making such examples difficult to find across a wide range of that space. Coverage-guided fuzzing is promising in this respect, but it leaves open the question of which coverage metrics are appropriate for DNNs. We observed that the abilities of existing coverage metrics are limited: they lack gradual guidance toward crashes because they simply search for a wide neuron-activation area, and none of the existing approaches can simultaneously achieve high crash quantity, high crash diversity, and efficient fuzzing time. Apart from this, the evaluation methodologies adopted by state-of-the-art fuzzers need rigorous improvements. To address these problems, we present a new DNN fuzzer named GradFuzz. Our idea is gradient vector coverage, which provides gradual guidance toward misclassified categories. We implemented our system and performed experiments under rigorous evaluation methodologies. Our evaluation results indicate that GradFuzz outperforms state-of-the-art DNN fuzzers: it can locate a more diverse set of errors, beneficial to adversarial training, on the MNIST and CIFAR-10 datasets without sacrificing either crash quantity or fuzzing efficiency. (c) 2022 Elsevier B.V. All rights reserved.
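
As a rough sketch of the abstract's core idea (and not the authors' implementation), the Python fragment below shows how a gradient-vector coverage criterion could be wired into a fuzzing loop: each input's loss gradient at the final classification layer is quantized into a bucket pattern, and a mutated seed counts as new coverage only if its pattern has not been seen before. The names gradient_vector, is_new_coverage, and model.classifier are assumptions made for this example.

import torch
import torch.nn.functional as F

def gradient_vector(model, x, y):
    # Gradient of the cross-entropy loss w.r.t. the final linear layer's
    # weights for one input; used here as that input's "gradient vector".
    # Assumption: model.classifier is the model's final nn.Linear layer.
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return model.classifier.weight.grad.detach().flatten()

seen_patterns = set()

def is_new_coverage(grad_vec, n_buckets=8):
    # Quantize each gradient component into one of n_buckets bins;
    # an unseen bucket pattern counts as new gradient-vector coverage.
    lo, hi = grad_vec.min(), grad_vec.max()
    scaled = (grad_vec - lo) / (hi - lo + 1e-12)
    pattern = tuple((scaled * n_buckets).long().clamp(max=n_buckets - 1).tolist())
    if pattern in seen_patterns:
        return False
    seen_patterns.add(pattern)
    return True

# Usage sketch inside a fuzzing loop: keep a mutant only if it adds coverage.
# if is_new_coverage(gradient_vector(model, mutated_x, label)):
#     seed_queue.append(mutated_x)

Bucketing the whole gradient vector is the simplest plausible discretization; the paper's actual metric tracks gradients toward specific misclassified categories, so a faithful implementation would partition coverage per class boundary.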
Pages: 165-180
Number of pages: 16
Related papers
50 records in total
  • [41] Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics
    Li, Xin
    Li, Fuxin
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 5775 - 5783
  • [42] Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks
    Barati, Ramin
    Safabakhsh, Reza
    Rahmati, Mohammad
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7036 - 7042
  • [43] Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
    Gubri, Martin
    Cordy, Maxime
    Papadakis, Mike
    Le Traon, Yves
    Sen, Koushik
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 738 - 748
  • [44] Explaining Adversarial Examples by Local Properties of Convolutional Neural Networks
    Aghdam, Hamed H.
    Heravi, Elnaz J.
    Puig, Domenec
    PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2017), VOL 5, 2017, : 226 - 234
  • [45] Simplicial-Map Neural Networks Robust to Adversarial Examples
    Paluzo-Hidalgo, Eduardo
    Gonzalez-Diaz, Rocio
    Gutierrez-Naranjo, Miguel A.
    Heras, Jonathan
    MATHEMATICS, 2021, 9 (02) : 1 - 16
  • [46] Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses
    Xu, Yonghao
    Du, Bo
    Zhang, Liangpei
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (02) : 1604 - 1617
  • [47] NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
    Li, Yandong
    Li, Lijun
    Wang, Liqiang
    Zhang, Tong
    Gong, Boqing
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [48] Adversarial robustness improvement for deep neural networks
    Eleftheriadis, Charis
    Symeonidis, Andreas
    Katsaros, Panagiotis
    MACHINE VISION AND APPLICATIONS, 2024, 35 (03)
  • [49] Adversarial image detection in deep neural networks
    Carrara, Fabio
    Falchi, Fabrizio
    Caldelli, Roberto
    Amato, Giuseppe
    Becarelli, Rudy
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (03) : 2815 - 2835