GradFuzz: Fuzzing deep neural networks with gradient vector coverage for adversarial examples

Cited by: 6
Authors
Park, Leo Hyun [1 ]
Chung, Soochang [1 ]
Kim, Jaeuk [1 ]
Kwon, Taekyoung [1 ]
Affiliations
[1] Yonsei University, Graduate School of Information, Information Security Lab, Seoul 03722, South Korea
Funding
National Research Foundation of Singapore
Keywords
Deep learning security; Coverage-guided DNN fuzzing; Gradient vector coverage
DOI
10.1016/j.neucom.2022.12.019
CLC number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) are susceptible to adversarial attacks that add perturbations to the input data, leading to misclassification errors and causing machine-learning systems to fail. For defense, adversarial training leverages possible crashing inputs, i.e., adversarial examples; however, the input space of DNNs is enormous and high-dimensional, making it difficult to find such examples across a wide range of that space. Coverage-guided fuzzing is promising in this respect, but it leaves open the question of which coverage metrics are appropriate for DNNs. We observed that the abilities of existing coverage metrics are limited: they lack gradual guidance toward crashes because they simply search for a wide neuron activation area, and none of the existing approaches can simultaneously achieve high crash quantity, high crash diversity, and efficient fuzzing time. In addition, the evaluation methodologies adopted by state-of-the-art fuzzers need rigorous improvement. To address these problems, we present a new DNN fuzzer named GradFuzz. Our idea is gradient vector coverage, which provides gradual guidance toward misclassified categories. We implemented our system and performed experiments under rigorous evaluation methodologies. Our evaluation results indicate that GradFuzz outperforms state-of-the-art DNN fuzzers: GradFuzz can locate a more diverse set of errors, beneficial to adversarial training, on the MNIST and CIFAR-10 datasets without sacrificing either crash quantity or fuzzing efficiency. (c) 2022 Elsevier B.V. All rights reserved.
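To make the abstract's notion of gradient vector coverage concrete, the sketch below shows one plausible reading of it in PyTorch: for each fuzzed input, take the gradient of the classification loss with respect to a chosen intermediate layer, quantize that gradient vector into a signature, and treat inputs reaching previously unseen signatures as new coverage. This is a minimal illustration under our own assumptions; the function names (gradient_vector, coverage_key, is_new_coverage), the layer choice, the normalization, and the bucketing scheme are illustrative and are not taken from the paper.

import torch
import torch.nn.functional as F

def gradient_vector(model, layer, x, label):
    # Gradient of the classification loss w.r.t. the chosen layer's output
    # for a single input x with ground-truth class index `label`.
    captured = {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep the gradient of this non-leaf tensor
        captured["act"] = output

    handle = layer.register_forward_hook(hook)
    logits = model(x.unsqueeze(0))    # add a batch dimension
    loss = F.cross_entropy(logits, torch.tensor([label]))
    model.zero_grad()
    loss.backward()
    handle.remove()
    return captured["act"].grad.detach().flatten()

def coverage_key(grad, n_buckets=10):
    # Quantize the gradient vector into a hashable signature so that
    # "new coverage" means reaching a previously unseen signature.
    g = grad / (grad.abs().max() + 1e-12)                      # scale to [-1, 1]
    buckets = ((g + 1) / 2 * n_buckets).long().clamp(0, n_buckets - 1)
    return tuple(buckets.tolist())

seen_signatures = set()

def is_new_coverage(model, layer, x, label):
    # Fuzzing-loop predicate: keep mutating an input only if it produces a
    # gradient-vector signature not observed before.
    key = coverage_key(gradient_vector(model, layer, x, label))
    if key in seen_signatures:
        return False
    seen_signatures.add(key)
    return True

The quantization granularity (n_buckets) trades sensitivity of the coverage signal against the size of the signature space; the paper's actual coverage criterion and guidance mechanism may differ from this sketch.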
Pages: 165-180
Number of pages: 16