Black-Box Testing of Deep Neural Networks through Test Case Diversity

Cited by: 24
Authors
Aghababaeyan, Zohreh [1 ]
Abdellatif, Manel [2 ,3 ]
Briand, Lionel [3 ,4 ]
Ramesh, S. [5 ]
Bagherzadeh, Mojtaba [3 ]
Affiliations
[1] Univ Ottawa, Sch EECS, Ottawa, ON K1N 6N5, Canada
[2] Ecole Technol Super, Software & Informat Technol Engn Dept, Montreal, PQ H3C 1K3, Canada
[3] Univ Ottawa, Sch EECS, Ottawa, ON K1N 6N5, Canada
[4] Univ Luxembourg, SnT Ctr Secur Reliabil & Trust, L-4365 Esch Sur Alzette, Luxembourg
[5] Gen Motors, Dept Res & Dev, Warren, MI 48092 USA
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Measurement; Testing; Feature extraction; Closed box; Fault detection; Neurons; Computational modeling; Coverage; deep neural network; diversity; faults; test; CLASSIFICATION; EFFICIENT; DISTANCE;
DOI
10.1109/TSE.2023.3243522
CLC Classification Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Deep Neural Networks (DNNs) have been extensively used in many areas including image processing, medical diagnostics and autonomous driving. However, DNNs can exhibit erroneous behaviours that may lead to critical errors, especially when used in safety-critical systems. Inspired by testing techniques for traditional software systems, researchers have proposed neuron coverage criteria, as an analogy to source code coverage, to guide the testing of DNNs. Despite very active research on DNN coverage, several recent studies have questioned the usefulness of such criteria in guiding DNN testing. Further, from a practical standpoint, these criteria are white-box as they require access to the internals or training data of DNNs, which is often not feasible or convenient. Measuring such coverage requires executing DNNs with candidate inputs to guide testing, which is not an option in many practical contexts. In this paper, we investigate diversity metrics as an alternative to white-box coverage criteria. For the previously mentioned reasons, we require such metrics to be black-box and not rely on the execution and outputs of DNNs under test. To this end, we first select and adapt three diversity metrics and study, in a controlled manner, their capacity to measure actual diversity in input sets. We then analyze their statistical association with fault detection using four datasets and five DNNs. We further compare diversity with state-of-the-art white-box coverage criteria. As a mechanism to enable such analysis, we also propose a novel way to estimate fault detection in DNNs. Our experiments show that relying on the diversity of image features embedded in test input sets is a more reliable indicator than coverage criteria to effectively guide DNN testing. Indeed, we found that one of our selected black-box diversity metrics far outperforms existing coverage criteria in terms of fault-revealing capability and computational time. 
Results also confirm the suspicion that state-of-the-art coverage criteria are inadequate for guiding the construction of test input sets that detect as many faults as possible with natural inputs.
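The abstract's central idea is that the diversity of feature vectors extracted from a test input set, measured without executing the DNN under test, can guide test selection. One common way to quantify such diversity is geometric diversity: the log-determinant of a similarity kernel over the feature vectors, which grows when inputs are mutually dissimilar and shrinks when the set contains near-duplicates. The sketch below is illustrative only; the specific metric, normalization, and function name are assumptions, not details taken from this record.

```python
import numpy as np

def geometric_diversity(features):
    """Geometric diversity of an input set: log-determinant of the
    Gram (cosine-similarity) matrix of its L2-normalized feature
    vectors. Larger values indicate a more diverse set."""
    F = np.asarray(features, dtype=float)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # unit-length rows
    S = F @ F.T                                        # similarity kernel
    # Small diagonal jitter keeps the determinant numerically stable.
    _, logdet = np.linalg.slogdet(S + 1e-8 * np.eye(len(F)))
    return logdet

# Mutually orthogonal feature vectors score higher than a set
# containing a near-duplicate pair.
diverse = np.eye(3)
redundant = np.array([[1.0, 0.0, 0.0],
                      [0.99, 0.01, 0.0],
                      [0.0, 0.0, 1.0]])
assert geometric_diversity(diverse) > geometric_diversity(redundant)
```

In a black-box testing workflow of the kind the abstract describes, the feature vectors would come from a pre-trained extractor applied to the candidate images, so the metric needs no access to the internals or outputs of the DNN under test.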
Pages: 3182-3204
Page count: 23