Enhancing Valid Test Input Generation with Distribution Awareness for Deep Neural Networks

Cited by: 0
Authors
Zhang, Jingyu [1 ]
Keung, Jacky [1 ]
Ma, Xiaoxue [1 ]
Li, Xiangyu [2 ]
Xiao, Yan [3 ]
Li, Yishu [1 ]
Chan, Wing Kwong [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[2] McGill Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
[3] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen, Peoples R China
Source
2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024 | 2024
Keywords
Input Validation; Anomaly Detection; Deep Learning; Software Testing;
DOI
10.1109/COMPSAC61105.2024.00148
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Comprehensive testing is important in improving the reliability of Deep Learning (DL)-based systems. Various Test Input Generators (TIGs) have been proposed to generate misbehavior-inducing test inputs. However, the lack of validity checking in TIGs often results in the generation of invalid inputs (i.e., inputs outside the learned distribution), leading to unreliable testing. To save the effort of manually checking validity and to improve test efficiency, it is important to assess the effectiveness and reliability of automated validators. In this study, we comprehensively assess four automated Input Validators (IVs). Our findings show that the accuracy of IVs ranges from 49% to 77%. Distance-based IVs generally outperform reconstruction-based and density-based IVs for both classification and regression tasks. Based on these findings, we enhance existing testing frameworks by incorporating distribution awareness through joint optimization. The results demonstrate that our framework leads to a 2% to 10% increase in the number of valid inputs, establishing our method as an effective technique for valid test input generation.
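Illustrative sketch (not from the paper): the abstract describes a distance-based input validator and a joint optimization that adds a distribution-awareness term to the misbehavior objective. The following minimal PyTorch-style sketch shows one way such a pairing could look; the class names, the k-NN feature distance, the 95th-percentile threshold, and the weight lam are all assumptions for illustration, not the authors' implementation.

    # Minimal sketch, assuming a PyTorch model with a differentiable feature
    # extractor. Validator design, threshold calibration, and loss weights
    # below are illustrative assumptions, not the authors' implementation.
    import torch
    import torch.nn.functional as F

    class DistanceBasedValidator:
        # Distance-based IV: flag an input as invalid (out of the learned
        # distribution) when its mean k-NN distance to the training features
        # exceeds a calibrated threshold.
        def __init__(self, feature_fn, train_inputs, k=5, quantile=0.95):
            self.feature_fn = feature_fn
            self.k = k
            with torch.no_grad():
                self.train_feats = feature_fn(train_inputs)            # (N, d)
                d_train = self.knn_distance(self.train_feats, skip_self=True)
            self.threshold = torch.quantile(d_train, quantile)         # calibration

        def knn_distance(self, feats, skip_self=False):
            dists = torch.cdist(feats, self.train_feats)               # (M, N)
            k = self.k + 1 if skip_self else self.k
            knn = dists.topk(k, largest=False).values
            return knn[:, 1:].mean(dim=1) if skip_self else knn.mean(dim=1)

        def is_valid(self, inputs):
            with torch.no_grad():
                return self.knn_distance(self.feature_fn(inputs)) <= self.threshold

    def joint_optimization_step(x, model, validator, wrong_labels, lam=0.1, lr=0.01):
        # Joint objective: push the model toward a misbehavior (targeted
        # misclassification here, as an example objective) while penalizing
        # distance from the learned distribution, so generated test inputs
        # are more likely to remain valid.
        x = x.clone().requires_grad_(True)
        misbehavior = F.cross_entropy(model(x), wrong_labels)
        validity = validator.knn_distance(validator.feature_fn(x)).mean()
        (misbehavior + lam * validity).backward()
        with torch.no_grad():
            x -= lr * x.grad
        return x.detach()

    # Illustrative usage (names are placeholders):
    #   validator = DistanceBasedValidator(model.backbone, train_inputs)
    #   x_new = joint_optimization_step(x_seed, model, validator, wrong_labels)
    #   mask = validator.is_valid(x_new)   # keep only in-distribution test inputs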
Pages: 1095-1100
Number of pages: 6