A Model-Based Test Script Generation Framework and Industrial Insight

Cited by: 0
Authors
Muhammad Nouman Zafar [1 ]
Wasif Afzal [1 ]
Eduard Paul Enoiu [1 ]
Zulqarnain Haider [2 ]
Inderjeet Singh [2 ]
Affiliations
[1] Mälardalen University
[2] Alstom Rail Sweden AB
Keywords
Model-based testing; Test script generation; Case study; Industrial survey;
DOI
10.1007/s42979-025-03823-7
Abstract
Model-based testing (MBT) generates test cases from a model representing the software under test (SUT). The generated abstract test cases need to be transformed into concrete, executable test scripts. Despite the benefits offered by MBT, its industrial adoption has been slow. This paper proposes a Model-Based Test scrIpt GenEration fRamework (TIGER) based on GraphWalker (GW), an open-source MBT tool, evaluates the accuracy of the generated test scripts in reflecting the real-world scenarios defined by the model, and reports the findings of an industrial survey on MBT adoption. We validated the robustness of TIGER using an industrial case study from Alstom Rail AB, Sweden. We injected faults into the model of the SUT based on three mutation operators to generate faulty test scripts. The aim of generating faulty test scripts is to produce failing test steps and thereby gain confidence in the absence of faults in the SUT. Moreover, we generated test scripts from the correct version of the model and executed them to compare their behavior with manually written test scripts. The experimental results show that the generated test scripts are executable, provide 100% requirements coverage, and can uncover faults at the software-in-the-loop simulation level of system testing. Additionally, analysis of the survey data reveals that MBT can address most of the identified testing challenges, but certain barriers to its adoption remain.
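The workflow the abstract describes — traversing a state model to derive abstract test steps, concretizing them into script lines, and mutating the model to produce intentionally faulty scripts — can be illustrated with a minimal sketch. The model, state and action names, script syntax, and the edge-redirection mutation operator below are illustrative assumptions, not taken from the paper or from GraphWalker's actual model format:

```python
# Hypothetical sketch of an MBT pipeline: model -> abstract steps -> script,
# plus one mutation operator that yields a faulty model. All names are
# illustrative; this is not the TIGER framework or GraphWalker's API.

# The SUT model as a directed graph: state -> list of (action, next_state).
MODEL = {
    "Idle":    [("start", "Running")],
    "Running": [("pause", "Paused"), ("stop", "Idle")],
    "Paused":  [("resume", "Running")],
}

def generate_path(model, start, max_steps=10):
    """Walk the model, preferring uncovered edges, until every edge is
    covered once; return the abstract test case as (state, action) steps."""
    steps, covered = [], set()
    state = start
    for _ in range(max_steps):
        edges = model[state]
        # Pick the first uncovered outgoing edge, else any edge to move on.
        action, target = next(
            ((a, t) for a, t in edges if (state, a) not in covered), edges[0]
        )
        steps.append((state, action))
        covered.add((state, action))
        state = target
        if all((s, a) in covered for s, es in model.items() for a, _ in es):
            break
    return steps

def to_script(steps):
    """Concretize abstract steps into executable-style script lines."""
    return [f"assert_state('{s}'); do('{a}')" for s, a in steps]

def mutate_edge_target(model, state, action, new_target):
    """A simple mutation operator: redirect one edge to a wrong target,
    producing a faulty model whose generated scripts should fail."""
    mutated = {s: list(es) for s, es in model.items()}
    mutated[state] = [((a, new_target) if a == action else (a, t))
                      for a, t in mutated[state]]
    return mutated
```

Running `to_script(generate_path(MODEL, "Idle"))` yields script lines covering all four edges of the model, while `mutate_edge_target(MODEL, "Running", "pause", "Idle")` returns a faulty variant whose scripts would assert the wrong target state, mirroring the paper's use of mutation to check that faulty models produce failing test steps.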