GradFuzz: Fuzzing deep neural networks with gradient vector coverage for adversarial examples

Cited by: 6
Authors
Park, Leo Hyun [1]
Chung, Soochang [1]
Kim, Jaeuk [1]
Kwon, Taekyoung [1]
Affiliations
[1] Yonsei Univ, Grad Sch Informat, Informat Secur Lab, Seoul 03722, South Korea
Funding
National Research Foundation, Singapore
Keywords
Deep learning security; Coverage-guided DNN fuzzing; Gradient vector coverage
DOI
10.1016/j.neucom.2022.12.019
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) are susceptible to adversarial attacks that add perturbations to the input data, leading to misclassification errors and causing machine-learning systems to fail. For defense, adversarial training leverages possible crashing inputs, i.e., adversarial examples; however, the input space of DNNs is enormous and high-dimensional, making such examples difficult to find across a wide range. Coverage-guided fuzzing is promising in this respect, but it raises the question of which coverage metrics are appropriate for DNNs. We observed that the abilities of existing coverage metrics are limited: because they simply search for a wide neuron-activation area, they lack gradual guidance toward crashes. None of the existing approaches simultaneously achieves high crash quantity, high crash diversity, and efficient fuzzing time. Moreover, the evaluation methodologies adopted by state-of-the-art fuzzers need rigorous improvements. To address these problems, we present a new DNN fuzzer named GradFuzz. Our idea is gradient vector coverage, which provides gradual guidance toward misclassified categories. We implemented our system and performed experiments under rigorous evaluation methodologies. Our evaluation results indicate that GradFuzz outperforms state-of-the-art DNN fuzzers: it can locate a more diverse set of errors, beneficial to adversarial training, on the MNIST and CIFAR-10 datasets without sacrificing either crash quantity or fuzzing efficiency. (c) 2022 Elsevier B.V. All rights reserved.
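The coverage-guided loop the abstract describes can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the paper's implementation: it assumes the gradient vector is taken as the gradient of the classification loss with respect to a hidden layer's activations, and it quantizes gradient signs into a hashable key so that the fuzzing loop keeps only inputs exhibiting a new gradient pattern. TinyNet, gradient_signature, and the sign-based quantization are all illustrative stand-ins for GradFuzz's actual coverage criterion.

```python
# Hedged sketch of gradient-vector coverage feedback for DNN fuzzing.
# All names and design choices here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Stand-in classifier (hypothetical); returns logits and a hidden layer."""
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(784, 32)
        self.out = nn.Linear(32, 10)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.out(h), h

def gradient_signature(model, x, label):
    """Sign-quantized gradient of the loss w.r.t. hidden activations.

    Assumption: the paper's gradient vectors are richer than a sign pattern;
    this quantization is only a simple, hashable coverage key.
    """
    logits, h = model(x)
    h.retain_grad()                      # keep grad on the non-leaf tensor
    loss = F.cross_entropy(logits, label)
    model.zero_grad()                    # clear stale parameter grads
    loss.backward()
    return tuple((h.grad > 0).flatten().tolist())

model = TinyNet()
coverage = set()
for _ in range(200):                     # toy fuzzing loop: random "mutations"
    x = torch.rand(1, 784)
    y = torch.randint(0, 10, (1,))
    sig = gradient_signature(model, x, y)
    if sig not in coverage:              # new gradient pattern -> keep the seed
        coverage.add(sig)
print(f"distinct gradient-direction patterns observed: {len(coverage)}")
```

In a real fuzzer the kept seeds would be mutated further rather than drawn at random, and the coverage key would be coarser than raw sign patterns (otherwise nearly every input looks novel); the point of the sketch is only the feedback loop in which gradient information, rather than neuron activation alone, decides which inputs are retained.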
Pages: 165-180
Page count: 16