Test Oracle Strategies for Model-Based Testing

Cited by: 40
Authors
Li, Nan [1 ]
Offutt, Jeff [2 ]
Affiliations
[1] Medidata Solutions, Research & Development Division, New York, NY 10014, USA
[2] George Mason University, Fairfax, VA 22030, USA
Keywords
Test oracle; RIPR model; test oracle strategy; test automation; subsumption; model-based testing; mutation
DOI
10.1109/TSE.2016.2597136
CLC Classification
TP31 [Computer Software]
Discipline Code
081202; 0835
Abstract
Testers use model-based testing to design abstract tests from models of the system's behavior. Testers instantiate the abstract tests into concrete tests with test input values and test oracles that check the results. Given the same test inputs, more elaborate test oracles have the potential to reveal more failures, but may also be more costly. This research investigates the ability of test oracles to reveal failures. We define ten new test oracle strategies that vary in the amount and frequency of program state checked, and empirically compare them with two baseline test oracle strategies. The paper presents three main findings. (1) Test oracles must check more than runtime exceptions, because checking exceptions alone is not effective at revealing failures. (2) Test oracles do not need to check the entire output state, because checking partial states reveals nearly as many failures as checking entire states. (3) Test oracles do not need to check program states multiple times, because checking states less frequently is as effective as checking states more frequently. In general, when state machine diagrams are used to generate tests, checking state invariants is a reasonably effective, low-cost approach to creating test oracles.
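To make the difference between these oracle strategies concrete, the sketch below (not taken from the paper; the Turnstile class, its invariant, and the three JUnit 4 tests are invented here purely for illustration) contrasts an exception-only oracle, a single state-invariant check, and a full-state check after every transition, all driving the same test inputs derived from a simple state machine.

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class OracleStrategySketch {

    // Hypothetical system under test: a coin-operated turnstile modeled as a
    // two-state machine (LOCKED <-> UNLOCKED).
    static class Turnstile {
        enum State { LOCKED, UNLOCKED }
        State state = State.LOCKED;
        int coins = 0;      // coins inserted so far
        int passages = 0;   // pushes that let someone through

        void insertCoin() { coins++; state = State.UNLOCKED; }

        void push() {
            if (state == State.UNLOCKED) {
                passages++;
                state = State.LOCKED;
            }
        }

        // A state invariant that might be derived from the state machine model:
        // the turnstile never admits more people than coins were inserted.
        boolean invariantHolds() { return passages <= coins; }
    }

    // Baseline oracle: run the inputs and rely on runtime exceptions only.
    // The test passes unless something throws, so it checks no program state.
    @Test
    public void exceptionOnlyOracle() {
        Turnstile t = new Turnstile();
        t.insertCoin();
        t.push();
    }

    // Partial-state oracle: check one state invariant once, at the end.
    @Test
    public void stateInvariantOracle() {
        Turnstile t = new Turnstile();
        t.insertCoin();
        t.push();
        assertTrue(t.invariantHolds());
    }

    // Full-state oracle: check every observable field after every transition.
    // The strongest and most expensive of the three oracles.
    @Test
    public void fullStateOracle() {
        Turnstile t = new Turnstile();
        t.insertCoin();
        assertEquals(Turnstile.State.UNLOCKED, t.state);
        assertEquals(1, t.coins);
        assertEquals(0, t.passages);
        t.push();
        assertEquals(Turnstile.State.LOCKED, t.state);
        assertEquals(1, t.coins);
        assertEquals(1, t.passages);
    }
}

Under the findings above, the invariant-checking oracle is the pragmatic middle ground: it checks only part of the state, and only once, yet would be expected to reveal nearly as many failures as the full-state oracle at a fraction of the cost.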
Pages: 372-395
Page count: 24
Related Papers
50 records in total
  • [1] An Empirical Analysis of Test Oracle Strategies for Model-based Testing
    Li, Nan
    Offutt, Jeff
    2014 IEEE SEVENTH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION (ICST), 2014, : 363 - 372
  • [2] Model-Based Test Oracle Generation for Automated Unit Testing of Agent Systems
    Padgham, Lin
    Zhang, Zhiyong
    Thangarajah, John
    Miller, Tim
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2013, 39 (09) : 1230 - 1244
  • [3] Industrial Evaluation of Test Suite Generation Strategies for Model-Based Testing
    Blom, Johan
    Jonsson, Bengt
    Nystrom, Sven-Olof
    2016 IEEE NINTH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW), 2016, : 209 - 218
  • [4] An automated model-based test oracle for access control systems
    Bertolino, Antonia
    Daoudagh, Said
    Lonetti, Francesca
    Marchetti, Eda
    2018 IEEE/ACM 13TH INTERNATIONAL WORKSHOP ON AUTOMATION OF SOFTWARE TEST (AST), 2018, : 2 - 8
  • [5] Strategies for Prioritizing Test Cases Generated Through Model-Based Testing Approaches
    Silva Ouriques, Joao Felipe
    2015 IEEE/ACM 37TH IEEE INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, VOL 2, 2015, : 879 - 882
  • [6] Killing strategies for model-based mutation testing
    Aichernig, Bernhard K.
    Brandl, Harald
    Joebstl, Elisabeth
    Krenn, Willibald
    Schlick, Rupert
    Tiran, Stefan
    SOFTWARE TESTING VERIFICATION & RELIABILITY, 2015, 25 (08) : 716 - 748
  • [7] AbsCon: A Test Concretizer for Model-based Testing
    Vanhecke, Jeremy
    Devroey, Xavier
    Perrouin, Gilles
    2019 IEEE 12TH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW 2019), 2019, : 15 - 22
  • [8] Model-Based Testing Strategies and Their (In)dependence on Syntactic Model Representations
    Peleska, Jan
    Huang, Wen-ling
    CRITICAL SYSTEMS: FORMAL METHODS AND AUTOMATED VERIFICATION, 2016, 9933 : 3 - 21
  • [9] Model-based testing strategies and their (in)dependence on syntactic model representations
    Huang, Wen-ling
    Peleska, Jan
    INTERNATIONAL JOURNAL ON SOFTWARE TOOLS FOR TECHNOLOGY TRANSFER, 2018, 20 (04) : 441 - 465