DeepMutation: Mutation Testing of Deep Learning Systems

Cited by: 249
Authors
Ma, Lei [1 ,2 ]
Zhang, Fuyuan [2 ]
Sun, Jiyuan [3 ]
Xue, Minhui [2 ]
Li, Bo [4 ]
Juefei-Xu, Felix [5 ]
Xie, Chao [3 ]
Li, Li [6 ]
Liu, Yang [2 ]
Zhao, Jianjun [3 ]
Wang, Yadong [1 ]
Affiliations
[1] Harbin Inst Technol, Harbin, Heilongjiang, Peoples R China
[2] Nanyang Technol Univ, Singapore, Singapore
[3] Kyushu Univ, Fukuoka, Fukuoka, Japan
[4] Univ Illinois, Urbana, IL 61801 USA
[5] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[6] Monash Univ, Clayton, Vic, Australia
Source
2018 29th IEEE International Symposium on Software Reliability Engineering (ISSRE) | 2018
Funding
National Key R&D Program of China;
Keywords
Deep learning; Software testing; Deep neural networks; Mutation testing;
DOI
10.1109/ISSRE.2018.00021
Chinese Library Classification
TP31 [Computer Software];
Discipline Classification Code
081202; 0835;
Abstract
Deep learning (DL) defines a new data-driven programming paradigm in which the internal system logic is largely shaped by the training data. The standard way of evaluating DL models is to examine their performance on a test dataset, so the quality of that test dataset is essential for gaining confidence in the trained models: with an inadequate test dataset, DL models that achieve high test accuracy may still lack generality and robustness. In traditional software testing, mutation testing is a well-established technique for evaluating the quality of test suites by analyzing to what extent a test suite detects injected faults. However, because of the fundamental differences between traditional software and deep-learning-based software, traditional mutation testing techniques cannot be directly applied to DL systems. In this paper, we propose a mutation testing framework specialized for DL systems to measure the quality of test data. Following the spirit of mutation testing for traditional software, we first define a set of source-level mutation operators that inject faults into the sources of DL (i.e., the training data and the training program). We then design a set of model-level mutation operators that inject faults directly into DL models without a training process. Finally, the quality of the test data is evaluated by analyzing to what extent the injected faults can be detected. The usefulness of the proposed mutation testing techniques is demonstrated on two public datasets, MNIST and CIFAR-10, with three DL models.
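To make the workflow in the abstract concrete, the sketch below illustrates one model-level mutation operator (Gaussian fuzzing of trained weights, applied without retraining) and a simplified mutation score. This is not the authors' DeepMutation implementation: the function names (gaussian_fuzz, mutation_score), the fuzzing ratio and noise scale, and the "a mutant is killed if some input the original model classifies correctly gets a different prediction from the mutant" criterion are illustrative assumptions only.

```python
# Minimal sketch (assumed names/parameters, not the DeepMutation code) of a
# model-level mutation operator plus a simplified mutation score.
import numpy as np


def gaussian_fuzz(weights, ratio=0.01, sigma=0.5, rng=None):
    """Mutate a trained model's weights without retraining: perturb a
    `ratio` fraction of the entries in each weight array with Gaussian noise."""
    rng = rng or np.random.default_rng()
    mutated = []
    for w in weights:
        w = w.copy()
        flat = w.reshape(-1)  # view into the copy; edits write through to w
        n = max(1, int(ratio * flat.size))
        idx = rng.choice(flat.size, size=n, replace=False)
        flat[idx] += rng.normal(0.0, sigma, size=n)
        mutated.append(w)
    return mutated


def mutation_score(orig_pred, mutant_preds, labels):
    """Simplified killing criterion: a mutant is 'killed' if some test input
    that the original model classifies correctly receives a different class
    from the mutant. The score is the fraction of killed mutants."""
    correct = orig_pred == labels
    killed = sum(
        1 for m_pred in mutant_preds if np.any(correct & (m_pred != orig_pred))
    )
    return killed / len(mutant_preds)


# Toy usage with stand-in arrays in place of a real trained model and its predictions.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(784, 128)), rng.normal(size=(128, 10))]
mutants = [gaussian_fuzz(weights, rng=rng) for _ in range(10)]

labels = rng.integers(0, 10, size=100)
orig_pred = labels.copy()  # pretend the original model classifies every test input correctly
mutant_preds = [
    np.where(rng.random(100) < 0.1, (labels + 1) % 10, labels)  # each mutant deviates on ~10% of inputs
    for _ in range(10)
]
print(f"mutation score: {mutation_score(orig_pred, mutant_preds, labels):.2f}")
```

The paper itself defines its killing criterion and mutation score at a finer (class-wise) granularity and provides a larger catalog of source-level and model-level operators; the sketch only mirrors the overall evaluation loop the abstract describes.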
Pages: 100-111
Number of pages: 12
相关论文
共 74 条
[31]  
Chan WK, 2005, QSIC 2005: FIFTH INTERNATIONAL CONFERENCE ON QUALITY SOFTWARE, PROCEEDINGS, P187
[32]  
Chollet F., 2015, about us
[33]   Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution [J].
Chu, Lingyang ;
Hu, Xia ;
Hu, Juhua ;
Wang, Lanjun ;
Pei, Jian .
KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, :1244-1253
[34]  
Chuanming Jing, 2008, 2008 22nd International Conference on Advanced Information Networking and Applications - Workshops, P667, DOI 10.1109/AINA.2008.98
[35]  
Coles H., 2016, P 25 INT S SOFTWARE, P449, DOI DOI 10.1145/2931037.2948707
[36]  
DeMillo R. A., 1988, Proceedings of the Second Workshop on Software Testing, Verification, and Analysis (Cat. No.88TH0225-3), P142, DOI 10.1109/WST.1988.5369
[37]   HINTS ON TEST DATA SELECTION - HELP FOR PRACTICING PROGRAMMER [J].
DEMILLO, RA ;
LIPTON, RJ .
COMPUTER, 1978, 11 (04) :34-41
[38]   CONSTRAINT-BASED AUTOMATIC TEST DATA GENERATION [J].
DEMILLO, RA ;
OFFUTT, AJ .
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 1991, 17 (09) :900-910
[39]   Improving Mutation Testing Process of Python']Python Programs [J].
Derezinska, Anna ;
Halas, Konrad .
SOFTWARE ENGINEERING IN INTELLIGENT SYSTEMS (CSOC2015), VOL 3, 2015, 349 :233-242
[40]  
Ferrari Fabiano Cutigi, 2008, 2008 First IEEE International Conference on Software Testing, Verification and Validation (ICST '08), P52, DOI 10.1109/ICST.2008.37