Test item prioritizing metrics for selective software testing

Cited: 0
Authors:
Hirayama, M
Mizuno, O [1]
Kikuno, T
Affiliations:
[1] Toshiba Co Ltd, Software Engn Ctr, Kawasaki, Kanagawa 2128582, Japan
[2] Osaka Univ, Grad Sch Informat Sci & Technol, Suita, Osaka 5650871, Japan
Keywords: software testing; selective testing; prioritization
DOI: not available
CLC number: TP [automation technology; computer technology]
Discipline code: 0812
Abstract
To respond to an active market's demand for software with various new functions, system testing must be completed within a limited period. In addition, important faults, which are closely related to functions essential to users or to the target system, should preferably be removed during system testing. Many techniques have been proposed for effective software testing, and among them selective software testing is one of the most cost-effective. However, most previous techniques cannot be applied to short-term development or to the initial development of software with many new functions, because their testing preparation is costly. In this paper, we propose a new method for selective system testing in which priorities assigned to functions play an essential role in the execution of testing. The priorities are determined from the evaluation results of three metrics for each function: the frequency of use, the complexity of the use scenario, and the fault impact on users. Detailed testing instructions are assigned to test items with high priority, and short, ordinary instructions are assigned to those with low priority. The difference in the volume of testing instructions controls the effort spent checking test items. Through experimental application to actual software testing in a company, we confirmed that the proposed selective system testing can detect both fatal faults related to key functions and faults critical to the system.
Pages: 2733-2743 (11 pages)
Related papers (50 total; first 10 shown)
  • [1] Prioritizing Software Anomalies with Software Metrics and Architecture Blueprints: A Controlled Experiment
    Guimaraes, Everton
    Garcia, Alessandro
    Figueiredo, Eduardo
    Cai, Yuanfang
    2013 5TH INTERNATIONAL WORKSHOP ON MODELING IN SOFTWARE ENGINEERING (MISE), 2013, : 82 - 88
  • [2] On "Prioritizing Test Cases for Regression Testing"
    Rothermel, Gregg
    Untch, Roland
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2025, 51 (03) : 802 - 807
  • [3] Prioritizing test cases for regression testing
    Rothermel, G
    Untch, RH
    Harrold, MJ
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2001, 27 (10) : 929 - 948
  • [4] Prioritizing Test Cases for Regression Testing of Location-Based Services: Metrics, Techniques, and Case Study
    Zhai, Ke
    Jiang, Bo
    Chan, W. K.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2014, 7 (01) : 54 - 67
  • [5] In-process metrics for software testing
    Kan, SH
    Parrish, J
    Manlove, D
    IBM SYSTEMS JOURNAL, 2001, 40 (01) : 220 - 241
  • [6] Applying software testing metrics to Lapack
    Barnes, DJ
    Hopkins, TR
    APPLIED PARALLEL COMPUTING: STATE OF THE ART IN SCIENTIFIC COMPUTING, 2006, 3732 : 228 - 236
  • [7] Using metrics to improve software testing
    Sorkowitz, A
    ICSM 2005: PROCEEDINGS OF THE 21ST IEEE INTERNATIONAL CONFERENCE ON SOFTWARE MAINTENANCE, 2005, : 725 - 725
  • [8] Using metrics to improve software testing
    Sorkowitz, Alfred
    Product-Focused Software Process Improvement, Proceedings, 2007, 4589 : 405 - 406
  • [9] Consideration of Human Factors for Prioritizing Test Cases for the Software System Test
    Malz, Christoph
    Sommer, Kerstin
    Goehner, Peter
    Vogel-Heuser, Birgit
    ENGINEERING PSYCHOLOGY AND COGNITIVE ERGONOMICS, 2011, 6781 : 303 - 312
  • [10] Towards Test Focus Selection for Integration Testing using Method Level Software Metrics
    Banitaan, Shadi
    Alenezi, Mamdouh
    Nygard, Kendall
    Magel, Kenneth
    PROCEEDINGS OF THE 2013 10TH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY: NEW GENERATIONS, 2013, : 343 - 348