Mutation-Based White Box Testing of Deep Neural Networks

Cited by: 1
Authors
Cetiner, Gokhan [1 ]
Yayan, Ugur [2 ]
Yazici, Ahmet [1 ]
Affiliations
[1] Univ Eskisehir Osmangazi, Comp Engn Dept, TR-26040 Eskisehir, Turkiye
[2] Univ Eskisehir Osmangazi, Software Engn Dept, TR-26040 Eskisehir, Turkiye
Keywords
Testing; Artificial neural networks; Robustness; Software testing; Long short term memory; Accuracy; Transformers; Predictive models; Libraries; Convolutional neural networks; Reinforcement learning; Convolutional neural network; deep neural networks; long short-term memory; machine learning; mutation-based testing; reinforcement learning; transformers;
DOI
10.1109/ACCESS.2024.3482114
Chinese Library Classification (CLC) number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Deep Neural Networks (DNNs) are used in many critical areas, such as autonomous vehicles and generative AI systems. Therefore, testing DNNs is vital, especially for models deployed in critical areas. Mutation-based testing is a highly effective technique for testing DNNs by mutating their complex structures. The Deep Mutation Module was developed to address mutation-based testing and the robustness challenges of DNNs. It analyses the structures of DNNs in detail and tests models by applying mutations to parameters and structures using its fault library. Testing DNN structures and detecting faults is a highly complex and open-ended challenge. The method proposed in this study applies mutations to DNN parameters to expose faults and weaknesses in the models, thereby testing their robustness. The paper focuses on mutation-based tests of a Reinforcement Learning (RL) model developed for electric vehicle routing, a Long Short-Term Memory (LSTM) model developed for prognostic predictions, and a Transformer-based neural network model for electric vehicle routing tasks. The best mutation scores for the LSTM model were measured as 96%, 91.02%, 71.19%, and 68.77%. The tests of the RL model resulted in mutation scores of 93.20%, 72.13%, 77.47%, 79.28%, and 55.74%. The mutation scores of the Transformer model were 75.87%, 76.36%, and 74.93%. These results show that the module can successfully test the targeted models and generate mutants classified as "survived mutants" that outperform the original models. In this way, it provides critical information to researchers for improving the overall performance of the models. Conducting these tests before deploying models in real-world applications minimizes faults and maximizes model success.
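To illustrate the workflow the abstract describes, the sketch below applies parameter-level mutations to a trained classifier and computes a mutation score. It is a minimal illustration in Python, assuming a Keras model; the helpers gaussian_weight_mutant and mutation_score, the noise level, and the kill threshold are hypothetical choices for demonstration and are not the paper's Deep Mutation Module or its fault library.

# Minimal sketch: weight-level mutation testing of a Keras classifier.
# All names below are illustrative; they do not reproduce the paper's Deep Mutation Module.
import numpy as np
from tensorflow import keras

def gaussian_weight_mutant(model, std=0.1, seed=None):
    """Return a copy of `model` whose weights are perturbed with Gaussian noise (one mutation operator)."""
    rng = np.random.default_rng(seed)
    mutant = keras.models.clone_model(model)  # same architecture, freshly initialised weights
    mutant.set_weights([w + rng.normal(0.0, std, w.shape).astype(w.dtype)
                        for w in model.get_weights()])
    return mutant

def mutation_score(model, mutants, x_test, y_test, tol=0.01):
    """Fraction of mutants that are 'killed', i.e. lose more than `tol` test accuracy; the rest 'survive'."""
    def accuracy(m):
        preds = np.argmax(m.predict(x_test, verbose=0), axis=1)
        return float(np.mean(preds == y_test))
    baseline = accuracy(model)
    killed = sum(accuracy(m) < baseline - tol for m in mutants)
    return killed / len(mutants)

if __name__ == "__main__":
    # Toy data and model purely for demonstration; a real study would use the trained RL/LSTM/Transformer models.
    x = np.random.rand(256, 20).astype("float32")
    y = (x.sum(axis=1) > 10).astype("int64")
    model = keras.Sequential([
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x, y, epochs=3, verbose=0)

    mutants = [gaussian_weight_mutant(model, std=0.2, seed=i) for i in range(10)]
    print("mutation score:", mutation_score(model, mutants, x, y))

A mutant whose accuracy stays within the tolerance is counted as a "survived mutant"; under this reading, a survivor that matches or exceeds the baseline is the kind of mutant the abstract notes can outperform the original model.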
Pages: 160156-160174
Page count: 19