Who Should Be Selected to Perform a Task in Crowdsourced Testing?

Cited by: 23
Authors
Cui, Qiang [1 ,4 ]
Wang, Junjie [1 ]
Yang, Guowei [2 ]
Xie, Miao [1 ,4 ]
Wang, Qing [1 ,3 ,4 ]
Li, Mingshu [1 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, Lab Internet Software Technol, Beijing, Peoples R China
[2] Texas State Univ, Dept Comp Sci, San Marcos, TX USA
[3] Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Beijing, Peoples R China
Source
2017 IEEE 41ST ANNUAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE (COMPSAC), VOL 1 | 2017
Funding
U.S. National Science Foundation; National Natural Science Foundation of China
DOI
10.1109/COMPSAC.2017.265
Chinese Library Classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Crowdsourced testing is an emerging trend in software testing that relies on crowd workers to accomplish test tasks. Due to cost constraints, a test task usually involves a limited number of crowd workers, and involving more workers does not necessarily result in detecting more bugs. Workers differ in testing experience and expertise, and these differences can substantially affect test outcomes. For example, inappropriate workers may miss true bugs, report false bugs, or report duplicate bugs, which lowers test quality. In current practice, a test task is usually dispatched in a random manner, so the quality of testing cannot be guaranteed. It is therefore important to select an appropriate subset of workers for a test task to ensure a high bug detection rate. This paper introduces ExReDiv, a novel hybrid approach for selecting a set of workers for a test task. It consists of three key strategies: the experience strategy selects experienced workers; the relevance strategy selects workers whose expertise is relevant to the given test task; and the diversity strategy selects diverse workers to avoid duplicate bug reports. We evaluate ExReDiv on 42 test tasks from one of the largest crowdsourced testing platforms in China, and the experimental results show its effectiveness.
Pages: 75-84
Number of pages: 10
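
The abstract describes ExReDiv as combining three selection strategies: experience, relevance, and diversity. The Python sketch below illustrates how such a hybrid, greedy worker-selection loop could be structured. It is a minimal illustration only: the field names (`experience`, `expertise`), the cosine-similarity scoring, and the weights are assumptions for demonstration, not the paper's actual formulation.

```python
# Illustrative sketch of a hybrid worker-selection loop in the spirit of
# ExReDiv. Assumed (hypothetical) inputs: each worker has an `experience`
# score and a sparse term-frequency vector `expertise` built from past bug
# reports; the test task is described by a term vector `task_terms`.
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_workers(workers, task_terms, k, w_exp=0.4, w_rel=0.4, w_div=0.2):
    """Greedily pick k workers, balancing experience, relevance to the
    task, and diversity from the workers already selected. The weights
    are illustrative, not taken from the paper."""
    selected = []
    candidates = list(workers)
    while candidates and len(selected) < k:
        def score(w):
            experience = w["experience"]                    # past activity
            relevance = cosine(w["expertise"], task_terms)  # fit to task
            # Diversity: 1 minus the max similarity to any selected worker,
            # so workers resembling the current team are penalized.
            diversity = 1.0 if not selected else 1.0 - max(
                cosine(w["expertise"], s["expertise"]) for s in selected)
            return w_exp * experience + w_rel * relevance + w_div * diversity
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage with made-up workers and a made-up task description:
workers = [
    {"id": "w1", "experience": 0.9, "expertise": {"login": 3, "crash": 1}},
    {"id": "w2", "experience": 0.5, "expertise": {"ui": 2, "layout": 4}},
    {"id": "w3", "experience": 0.7, "expertise": {"login": 2, "payment": 3}},
]
team = select_workers(workers, task_terms={"login": 1, "crash": 2}, k=2)
print([w["id"] for w in team])  # ['w1', 'w3'] with these toy inputs
```

The greedy loop only shows how experience, relevance, and diversity trade off when assembling the worker set; the paper's actual method would define its own term extraction, scoring, and weighting.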