Can Offline Testing of Deep Neural Networks Replace Their Online Testing? A Case Study of Automated Driving Systems

Cited by: 22
Authors
Ul Haq, Fitash [1 ]
Shin, Donghwan [1 ]
Nejati, Shiva [1 ,2 ]
Briand, Lionel [1 ,2 ]
Affiliations
[1] Univ Luxembourg, SnT, Luxembourg, Luxembourg
[2] Univ Ottawa, Ottawa, ON, Canada
Funding
European Research Council; Natural Sciences and Engineering Research Council of Canada; National Research Foundation of Singapore
Keywords
Deep Learning; Testing; Self-driving Cars; Safety
DOI
10.1007/s10664-021-09982-4
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
We distinguish two general modes of testing for Deep Neural Networks (DNNs): offline testing, where DNNs are tested as individual units against test datasets obtained without involving the DNNs under test, and online testing, where DNNs are embedded into a specific application environment and tested in a closed-loop mode, in interaction with that environment. Typically, DNNs undergo both types of testing during their development life cycle: offline testing is applied immediately after DNN training, and online testing follows once a DNN is deployed within a specific application environment. In this paper, we study the relationship between offline and online testing. Our goal is to determine how offline and online testing differ or complement one another, and whether offline testing results can be used to reduce the cost of online testing. Though these questions are relevant to autonomous systems in general, we study them in the context of automated driving systems, using as study subjects DNNs that automate the end-to-end steering control of self-driving vehicles. Our results show that offline testing is less effective than online testing, as many safety violations identified by online testing could not be identified by offline testing, whereas large prediction errors generated by offline testing always led to severe safety violations detectable by online testing. Further, we cannot exploit offline testing results to reduce the cost of online testing in practice, since we are not able to identify specific situations where offline testing could be as accurate as online testing in identifying safety requirement violations.
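To make the distinction concrete, the sketch below contrasts the two testing modes for a steering-angle DNN. It is an illustrative Python sketch, not the setup used in the paper: the model interface, the simulator_step function, and the 1.5 m lane-departure threshold are hypothetical placeholders.

# Illustrative sketch (not the paper's implementation): offline testing scores a
# steering-angle DNN against a fixed labelled dataset; online testing runs the same
# model in a closed loop against a (stubbed) simulator and checks a safety requirement.

from typing import Callable, List, Tuple


def offline_test(model: Callable[[object], float],
                 dataset: List[Tuple[object, float]]) -> float:
    """Offline testing: mean absolute prediction error on a static test dataset."""
    errors = [abs(model(frame) - true_angle) for frame, true_angle in dataset]
    return sum(errors) / len(errors)


def online_test(model: Callable[[object], float],
                simulator_step: Callable[[object, float], Tuple[object, float]],
                initial_obs: object,
                steps: int = 500,
                max_lane_offset_m: float = 1.5) -> bool:
    """Online testing: drive the model in a closed loop and flag safety violations.

    `simulator_step(obs, steering)` is assumed to return the next observation and
    the vehicle's lateral offset from the lane centre in metres (a placeholder API).
    """
    obs = initial_obs
    for _ in range(steps):
        steering = model(obs)                 # model output feeds back into the loop
        obs, lane_offset = simulator_step(obs, steering)
        if abs(lane_offset) > max_lane_offset_m:
            return True                       # safety requirement violated
    return False

In this sketch, offline testing only measures how far predictions deviate from recorded ground-truth steering angles, whereas online testing feeds each prediction back into the environment, so accumulated small errors can surface as safety violations that the offline metric never observes.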
Pages: 30