Advanced Crowdsourced Test Report Prioritization Based on Adaptive Strategy

Cited by: 4
Authors
Zhu, Penghua [1 ]
Li, Ying [1 ]
Li, Tongyu [2 ]
Ren, Huimin [3 ]
Sun, Xiaolei [4 ]
Affiliations
[1] North China Inst Aerosp Engn, Langfang 065099, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[3] Beijing Aerosp Automat Control Inst, Beijing 100854, Peoples R China
[4] Wuhu Inst Technol, Wuhu 241002, Peoples R China
Keywords
Task analysis; Greedy algorithms; Crowdsourcing; Software algorithms; Software testing; Encoding; Software; Crowdsourced software testing; test report prioritization; text classification
DOI
10.1109/ACCESS.2022.3176086
CLC Number
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
Crowdsourced testing is an emerging trend in software testing that exploits the efficiency of crowdsourcing and cloud platforms, and it has gradually been adopted in many fields. In crowdsourced software testing, workers submit their results as test reports after completing the test tasks, so inspecting the large number of resulting reports is an arduous but unavoidable software maintenance task. Because crowdsourced test reports are numerous and complex, they must be prioritized to improve inspection efficiency, yet there are no systematic methods for crowdsourced test report prioritization. In regression testing, by contrast, test case prioritization techniques are mature. We therefore migrate test case prioritization methods to crowdsourced test report prioritization and evaluate their effectiveness. We process the text of the test reports with natural language processing techniques, in particular word segmentation, and then prioritize the reports with four methods: the total greedy algorithm, the additional greedy algorithm, a genetic algorithm, and adaptive random testing (ART). The results show that all four methods perform well in prioritizing crowdsourced test reports, with an average APFD (average percentage of faults detected) above 0.8.
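A minimal sketch (not the authors' implementation) of two pieces the abstract describes: additional greedy prioritization over keyword coverage, and the APFD metric used for evaluation. The whitespace tokenizer, report texts, and fault matrix below are illustrative assumptions; the paper applies proper NLP word segmentation to the report text.

```python
# Sketch of "additional greedy" crowdsourced-report prioritization and the
# APFD metric. Not the authors' code: tokenization, report texts, and the
# fault matrix are illustrative assumptions.

def tokenize(text):
    # Placeholder for the paper's NLP word-segmentation step.
    return set(text.lower().split())

def additional_greedy(reports):
    """Order reports by repeatedly picking the one that covers the most
    keywords not yet covered by earlier picks."""
    remaining = {i: tokenize(t) for i, t in enumerate(reports)}
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda i: len(remaining[i] - covered))
        if covered and not remaining[best] - covered:
            covered = set()  # all keywords seen: reset, as in additional TCP
            continue
        covered |= remaining.pop(best)
        order.append(best)
    return order

def apfd(order, faults):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_j is
    the 1-based position of the first report revealing fault j."""
    n, m = len(order), len(faults)
    pos = {r: i + 1 for i, r in enumerate(order)}
    tf = sum(min(pos[r] for r in revealing) for revealing in faults.values())
    return 1 - tf / (n * m) + 1 / (2 * n)

if __name__ == "__main__":
    reports = [
        "app crashes when login button is tapped",     # hypothetical reports
        "tapping the login button crashes the app",
        "settings page renders blank after rotation",
    ]
    faults = {"F1": {0, 1}, "F2": {2}}  # fault -> reports that reveal it
    order = additional_greedy(reports)
    print(order, round(apfd(order, faults), 3))
```

The reset of the coverage set once every known keyword has been seen mirrors the additional strategy from test case prioritization; the total greedy variant would instead rank all reports once by their full keyword counts.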
Pages: 53522-53532
Number of pages: 11