Ex2: Monte Carlo Tree Search-based test inputs prioritization for fuzzing deep neural networks

Cited by: 3
Authors
Ye, Aoshuang [1 ]
Wang, Lina [1 ]
Zhao, Lei [1 ]
Ke, Jianpeng [1 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
deep neural networks; fuzz testing; inputs prioritization; Monte Carlo Tree Search;
DOI
10.1002/int.23072
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Fuzzing is considered an essential approach to guaranteeing the reliability of deep neural network (DNN) based systems. DNN fuzzing leverages various input prioritization methods to guide the testing process. Current research mainly focuses on constructing testing metrics that symbolize the logical representation of the DNN to guide the generation of test cases, neglecting the potential performance gains offered by heuristic algorithms. Moreover, a straightforward queue structure cannot represent the metamorphic relationships between generated inputs in DNN fuzzing. Therefore, developing an appropriate heuristic algorithm-based input prioritization method is critical to improving the performance of DNN fuzzers. In this paper, we propose a Monte Carlo Tree Search (MCTS) based input prioritization method called Ex² (Exploration and Exploitation) that formulates DNN testing exploration as a sequential decision process. The technique introduces an innovative tree-structure design that schedules inputs from a statistical perspective. Unlike traditional DNN testing, the batch pool is maintained in the form of nodes in the MCTS tree. The links between nodes precisely represent the metamorphic relationships between input batches, which indicate the potential value of in-depth search. Furthermore, a novel simulation mechanism is implemented to adapt MCTS to DNN testing, which attains better coverage feedback. The effectiveness of our method is comprehensively investigated on six popular deep learning models from the LeNet and VGG families. Comparison experiments against DeepHunter, TensorFuzz, and DeepSmartFuzzer demonstrate its efficacy on various testing metrics. The experimental results show that Ex² enhances the coverage gain of DNN fuzzing by up to 30% over the best performance in the comparison groups.
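The abstract's core idea, keeping input batches as nodes of an MCTS tree whose parent-child links record which batch was mutated from which, and scheduling batches by their statistical value, can be sketched roughly as follows. This is a minimal generic sketch, not the paper's actual implementation: `BatchNode`, `mutate`, and `coverage_reward` are illustrative stand-ins, and the UCB1 selection rule is the standard MCTS choice rather than anything the abstract specifies.

```python
import math


class BatchNode:
    """A node in the MCTS tree; each node holds one batch of test inputs.
    Parent-child links stand in for the metamorphic relationship between
    a seed batch and the batches mutated from it."""

    def __init__(self, batch, parent=None):
        self.batch = batch            # the test inputs (placeholder values here)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0       # accumulated coverage gain

    def ucb1(self, c=1.4):
        """Upper Confidence Bound score: exploitation (mean reward) plus an
        exploration bonus for rarely visited nodes."""
        if self.visits == 0:
            return float("inf")
        mean = self.total_reward / self.visits
        bonus = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return mean + bonus


def select(node):
    """Descend the tree, always following the child with the best UCB1 score."""
    while node.children:
        node = max(node.children, key=lambda n: n.ucb1())
    return node


def expand(node, mutate):
    """Create a child by applying a mutation to the selected batch."""
    child = BatchNode(mutate(node.batch), parent=node)
    node.children.append(child)
    return child


def backpropagate(node, reward):
    """Propagate the coverage reward of one simulation back to the root."""
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent


def mcts_prioritize(seed_batch, mutate, coverage_reward, iterations=100):
    """One round of MCTS-based input prioritization: repeatedly select,
    expand, simulate (here: just score coverage), and backpropagate, then
    return the root's most-visited child, i.e. the batch judged most
    promising for in-depth search."""
    root = BatchNode(seed_batch)
    for _ in range(iterations):
        leaf = select(root)
        child = expand(leaf, mutate)
        reward = coverage_reward(child.batch)   # stand-in for the simulation step
        backpropagate(child, reward)
    return max(root.children, key=lambda n: n.visits)
```

The UCB1 rule is what gives the method its "Exploration and Exploitation" character: batches whose mutations have yielded high coverage gain are revisited, while under-explored branches of the mutation tree still receive a guaranteed share of the budget.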
Pages: 11966-11984
Number of pages: 19