TESRAC: A Framework for Test Suite Reduction Assessment at Scale

Cited by: 0
Authors
Becho, Joao [1 ,2 ]
Cerveira, Frederico [3 ]
Leitao, Joao [4 ,5 ]
Oliveira, Rui Andre [4 ,5 ]
Affiliations
[1] Univ Lisbon, LASIGE, Lisbon, Portugal
[2] Univ Lisbon, FCUL, Lisbon, Portugal
[3] Univ Coimbra, Dept Informat Engn, CISUC, Coimbra, Portugal
[4] NOVA Univ Lisbon, NOVA LINCS, Lisbon, Portugal
[5] NOVA Univ Lisbon, FCT, Lisbon, Portugal
Source
2022 IEEE 15TH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION (ICST 2022) | 2022
Keywords
software testing; test suite reduction; test suite minimization; test case prioritization; evaluation; SELECTION;
DOI
10.1109/ICST53961.2022.00028
CLC classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Regression testing is an important task in any large software project; however, as the codebase grows, test suites expand and accumulate highly redundant test cases, greatly increasing the time required for testing. Various test suite reduction tools have been proposed to address this problem, but their absolute and relative performance remains unclear to prospective users, since there is no standardized evaluation or approach for choosing the best reduction tool. This work proposes TESRAC, a framework for assessing and comparing test suite reduction tools, which allows users to evaluate and rank a customizable set of tools in terms of reduction performance according to three criteria (coverage, dimension, and execution time), and which can be configured to prioritize specific criteria. We used TESRAC to assess and compare three test suite reduction tools and one test suite prioritization tool adapted to perform test suite reduction, across eleven projects of varying dimensions and characteristics. Results show that a test suite prioritization tool can be adapted to perform adequate test suite reduction, and that a subset of tools outperforms the rest for the majority of the projects. However, the project and test suite being reduced can strongly affect a tool's performance.
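The abstract does not detail how TESRAC combines its three criteria into a ranking, so the following is only a minimal sketch of one way a configurable, criteria-weighted ranking could work; the function name, the min-max normalization scheme, and the metric keys are assumptions, not TESRAC's actual implementation.

```python
# Hypothetical sketch of criteria-weighted tool ranking (not TESRAC's code).
# Each tool reports coverage (higher is better), reduced-suite dimension and
# execution time (both lower is better); weights let users prioritize criteria.

def rank_tools(results, weights):
    """results: {tool: {"coverage": c, "dimension": d, "time": t}}
    weights: {"coverage": w1, "dimension": w2, "time": w3}
    Returns tool names sorted best-first by weighted, normalized score."""
    def norm(metric, value, higher_is_better):
        vals = [m[metric] for m in results.values()]
        lo, hi = min(vals), max(vals)
        if hi == lo:
            return 1.0  # all tools tie on this criterion
        scaled = (value - lo) / (hi - lo)  # min-max scale to [0, 1]
        return scaled if higher_is_better else 1.0 - scaled

    scores = {
        tool: weights["coverage"] * norm("coverage", m["coverage"], True)
            + weights["dimension"] * norm("dimension", m["dimension"], False)
            + weights["time"] * norm("time", m["time"], False)
        for tool, m in results.items()
    }
    return sorted(scores, key=scores.get, reverse=True)
```

With equal weights, a tool that achieves higher coverage with a smaller, faster reduced suite ranks first; shifting weight onto, say, execution time reproduces the "prioritize specific criteria" configuration the abstract describes.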
Pages: 174-184
Page count: 11