BET: Black-Box Efficient Testing for Convolutional Neural Networks

Cited by: 14
Authors
Wang, Jialai [1 ,4 ]
Qiu, Han [2 ,4 ]
Rong, Yi [2 ,5 ]
Ye, Hengkai [3 ]
Li, Qi [1 ,4 ,6 ]
Li, Zongpeng [2 ,4 ]
Zhang, Chao [1 ,4 ,6 ]
Affiliations
[1] Tsinghua Univ, BNRist, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] Purdue Univ, W Lafayette, IN 47907 USA
[4] Tsinghua Univ, Inst Network Sci & Cyberspace, Beijing, Peoples R China
[5] Tsinghua Univ, Sch Software, Beijing, Peoples R China
[6] Zhongguancun Lab, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2022 | 2022
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Convolutional Neural Networks; Black-box Testing;
DOI
10.1145/3533767.3534386
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
It is important to test convolutional neural networks (CNNs) to identify defects (e.g., error-inducing inputs) before deploying them in security-sensitive scenarios. Although existing white-box testing methods can effectively test CNN models with high neuron coverage, they are not applicable to privacy-sensitive scenarios where full knowledge of the target CNN model is lacking. In this work, we propose a novel Black-box Efficient Testing (BET) method for CNN models. The core insight of BET is that CNNs are generally sensitive to continuous perturbations. Thus, by generating such continuous perturbations in a black-box manner, we design a tunable objective function to guide our testing process toward thoroughly exploring defects across the different decision boundaries of the target CNN model. We further design an efficiency-centric policy to find more error-inducing inputs within a fixed query budget. We conduct extensive evaluations on three well-known datasets and five popular CNN structures. The results show that BET significantly outperforms existing white-box and black-box testing methods in terms of effective error-inducing inputs found within a fixed query/inference budget. We further show that the error-inducing inputs found by BET can be used to fine-tune the target model, improving its accuracy by up to 3%.
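The abstract describes querying a model purely as a black box, applying continuous perturbations to seed inputs, and counting label flips against a fixed query budget. A minimal sketch of that workflow is below; it is an illustration only, not BET itself — the `black_box_test` helper, the toy linear "model", and the plain random perturbations stand in for BET's tunable objective and efficiency-centric search policy.

```python
import numpy as np

def black_box_test(predict, x, budget=50, eps=0.3, seed=0):
    """Search for error-inducing inputs by perturbing x and querying the
    model only through predict() (black-box access, fixed query budget).
    Illustrative stand-in for BET, not the paper's actual algorithm."""
    rng = np.random.default_rng(seed)
    base_label = predict(x)          # one query to establish the reference label
    errors = []
    for _ in range(budget):
        # Small continuous perturbation: a random unit direction scaled by eps
        # (BET instead guides this search with a tunable objective function).
        direction = rng.standard_normal(x.shape)
        direction /= np.linalg.norm(direction)
        candidate = x + eps * direction
        if predict(candidate) != base_label:  # label flip => error-inducing input
            errors.append(candidate)
    return errors

# Toy black-box "model": a fixed linear decision rule we may only query.
w = np.array([1.0, -1.0, 0.5])
predict = lambda v: int(v @ w > 0)

x0 = np.array([0.1, 0.0, 0.0])
found = black_box_test(predict, x0)
```

Every input in `found` is misclassified relative to the seed's label while lying within an `eps`-ball of it, mirroring how error-inducing inputs are defined; such inputs can then be folded back into fine-tuning, as the abstract notes.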
Pages: 164-175 (12 pages)