Black-Box Testing of Deep Neural Networks through Test Case Diversity

Cited by: 24
Authors
Aghababaeyan, Zohreh [1 ]
Abdellatif, Manel [2 ,3 ]
Briand, Lionel [3 ,4 ]
Ramesh, S. [5 ]
Bagherzadeh, Mojtaba [3 ]
Affiliations
[1] Univ Ottawa, Sch EECS, Ottawa, ON K1N 6N5, Canada
[2] Ecole Technol Super, Software & Informat Technol Engn Dept, Montreal, PQ H3C 1K3, Canada
[3] Univ Ottawa, Sch EECS, Ottawa, ON K1N 6N5, Canada
[4] Univ Luxembourg, SnT Ctr Secur Reliabil & Trust, L-4365 Esch Sur Alzette, Luxembourg
[5] Gen Motors, Dept Res & Dev, Warren, MI 48092 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Measurement; Testing; Feature extraction; Closed box; Fault detection; Neurons; Computational modeling; Coverage; deep neural network; diversity; faults; test; CLASSIFICATION; EFFICIENT; DISTANCE;
DOI
10.1109/TSE.2023.3243522
Chinese Library Classification (CLC)
TP31 [Computer software];
Subject Classification Codes
081202 ; 0835 ;
Abstract
Deep Neural Networks (DNNs) have been extensively used in many areas including image processing, medical diagnostics and autonomous driving. However, DNNs can exhibit erroneous behaviours that may lead to critical errors, especially when used in safety-critical systems. Inspired by testing techniques for traditional software systems, researchers have proposed neuron coverage criteria, as an analogy to source code coverage, to guide the testing of DNNs. Despite very active research on DNN coverage, several recent studies have questioned the usefulness of such criteria in guiding DNN testing. Further, from a practical standpoint, these criteria are white-box as they require access to the internals or training data of DNNs, which is often not feasible or convenient. Measuring such coverage requires executing DNNs with candidate inputs to guide testing, which is not an option in many practical contexts. In this paper, we investigate diversity metrics as an alternative to white-box coverage criteria. For the previously mentioned reasons, we require such metrics to be black-box and not rely on the execution and outputs of DNNs under test. To this end, we first select and adapt three diversity metrics and study, in a controlled manner, their capacity to measure actual diversity in input sets. We then analyze their statistical association with fault detection using four datasets and five DNNs. We further compare diversity with state-of-the-art white-box coverage criteria. As a mechanism to enable such analysis, we also propose a novel way to estimate fault detection in DNNs. Our experiments show that relying on the diversity of image features embedded in test input sets is a more reliable indicator than coverage criteria to effectively guide DNN testing. Indeed, we found that one of our selected black-box diversity metrics far outperforms existing coverage criteria in terms of fault-revealing capability and computational time. 
Results also confirm the suspicions that state-of-the-art coverage criteria are not adequate to guide the construction of test input sets to detect as many faults as possible using natural inputs.
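The abstract above does not name the winning black-box metric, but the general idea of measuring the diversity of a test input set from its feature vectors can be illustrated with a small sketch. The following is a hedged, illustrative implementation of one common such metric, geometric diversity (the log-determinant of the Gram matrix of normalized feature vectors), not the authors' actual code; the function name and normalization choice are assumptions for the example.

```python
import numpy as np

def geometric_diversity(features):
    """Illustrative geometric diversity of a test input set.

    Computes the log-determinant of the Gram (pairwise similarity)
    matrix of row-normalized feature vectors. A larger value means the
    vectors span a larger volume, i.e. the inputs are more diverse.
    This is a sketch of the general technique, not the paper's code.
    """
    X = np.asarray(features, dtype=float)
    # Row-normalize so similarities are cosine-like and scale-invariant.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = X @ X.T  # Gram matrix of pairwise similarities
    # slogdet is numerically safer than det for near-singular matrices.
    sign, logdet = np.linalg.slogdet(S)
    return logdet if sign > 0 else float("-inf")

# A spread-out set scores higher than a set of near-duplicates.
diverse = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
redundant = [[1, 0, 0], [0.99, 0.1, 0], [0.98, 0.15, 0.05]]
assert geometric_diversity(diverse) > geometric_diversity(redundant)
```

In a black-box setting like the paper's, the feature vectors would come from a fixed, independent feature extractor applied to candidate test images, so no execution of the DNN under test is required.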
Pages: 3182-3204
Page count: 23
Related Papers
50 records total
  • [41] Opening the black box - Data driven visualization of neural networks
    Tzeng, FY
    Ma, KL
    IEEE VISUALIZATION 2005, PROCEEDINGS, 2005, : 383 - 390
  • [42] Novel Black-Box Arc Model Validated by High-Voltage Circuit Breaker Testing
    Ohtaka, Toshiya
    Kerterz, Viktor
    Smeets, Rene Peter Paul
    IEEE TRANSACTIONS ON POWER DELIVERY, 2018, 33 (04) : 1835 - 1844
  • [43] Seed Selection for Testing Deep Neural Networks
    Zhi, Yuhan
    Xie, Xiaofei
    Shen, Chao
    Sun, Jun
    Zhang, Xiaoyu
    Guan, Xiaohong
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (01)
  • [44] Behavior Pattern-Driven Test Case Selection for Deep Neural Networks
    Chen, Yanshan
    Wang, Ziyuan
    Wang, Dong
    Yao, Yongming
    Chen, Zhenyu
    2019 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING (AITEST), 2019, : 89 - 90
  • [45] Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification With Deep Learning
    Qi, Peihan
    Jiang, Tao
    Wang, Lizhan
    Yuan, Xu
    Li, Zan
    IEEE TRANSACTIONS ON RELIABILITY, 2022, 71 (02) : 674 - 686
  • [46] Comparing Offline and Online Testing of Deep Neural Networks: An Autonomous Car Case Study
    Ul Haq, Fitash
    Shin, Donghwan
    Nejati, Shiva
    Briand, Lionel C.
    2020 IEEE 13TH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VALIDATION AND VERIFICATION (ICST 2020), 2020, : 85 - 95
  • [47] Can Offline Testing of Deep Neural Networks Replace Their Online Testing? A Case Study of Automated Driving Systems
    Fitash Ul Haq
    Donghwan Shin
    Shiva Nejati
    Lionel Briand
    Empirical Software Engineering, 2021, 26
  • [48] Can Offline Testing of Deep Neural Networks Replace Their Online Testing? A Case Study of Automated Driving Systems
    Ul Haq, Fitash
    Shin, Donghwan
    Nejati, Shiva
    Briand, Lionel
    EMPIRICAL SOFTWARE ENGINEERING, 2021, 26 (05)
  • [49] Semantics of the Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable and Explainable?
    Gaur, Manas
    Faldu, Keyur
    Sheth, Amit
    IEEE INTERNET COMPUTING, 2021, 25 (01) : 51 - 59
  • [50] Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
    Wang, Jingyi
    Dong, Guoliang
    Sun, Jun
    Wang, Xinyu
    Zhang, Peixin
    2019 IEEE/ACM 41ST INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2019), 2019, : 1245 - 1256