Cost and Quality in Crowdsourcing Workflows

Cited by: 1
Authors
Helouet, Loic [1 ]
Miklos, Zoltan [2 ]
Singh, Rituraj [2 ]
Affiliations
[1] Univ Rennes 1, INRIA Rennes, Rennes, France
[2] Univ Rennes 1, Rennes, France
Source
APPLICATION AND THEORY OF PETRI NETS AND CONCURRENCY (PETRI NETS 2021) | 2021, Vol. 12734
Keywords
Crowdsourcing; Data-centric workflows
DOI
10.1007/978-3-030-76983-3_3
Chinese Library Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Crowdsourcing platforms provide tools to replicate and distribute micro-tasks (simple, independent work units) to crowds and to assemble the results. However, real-life problems are often complex: they require collecting, organizing, or transforming data under quality and cost constraints. This work considers dynamic realization policies for complex crowdsourcing tasks. Workflows provide a way to organize a complex task into phases and to guide its realization. The challenge is then to deploy a workflow on a crowd, i.e., to allocate workers to phases so that the overall workflow terminates with accurate results at a reasonable cost. The standard "static" allocation of work in crowdsourcing assigns a fixed number of workers to each micro-task and aggregates their results. We define new dynamic worker-allocation techniques that consider progress in a workflow, the quality of the synthesized data, and the remaining budget. Evaluation on a benchmark shows that the dynamic approaches outperform static ones in terms of both cost and accuracy.
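The contrast between static and dynamic allocation can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it is a minimal illustration under stated assumptions: a binary micro-task, majority-vote aggregation, and hypothetical helpers `static_policy` and `dynamic_policy`. The static policy always pays for a fixed number of answers, while the dynamic one buys answers one at a time and stops early once the aggregated answer looks confident enough or the budget runs out.

```python
import random

def aggregate(labels):
    """Majority vote over binary labels; returns (answer, confidence)."""
    ones = sum(labels)
    answer = 1 if ones * 2 >= len(labels) else 0
    confidence = max(ones, len(labels) - ones) / len(labels)
    return answer, confidence

def static_policy(ask_worker, k=5):
    """Static allocation: always buy exactly k answers, then aggregate."""
    answer, _ = aggregate([ask_worker() for _ in range(k)])
    return answer, k  # (result, cost in paid answers)

def dynamic_policy(ask_worker, budget=9, target=0.8, min_answers=3):
    """Dynamic allocation: buy answers one at a time; stop as soon as the
    aggregate is confident enough, or when the budget is exhausted."""
    labels = []
    while len(labels) < budget:
        labels.append(ask_worker())
        answer, confidence = aggregate(labels)
        if len(labels) >= min_answers and confidence >= target:
            break
    return answer, len(labels)

# Simulated crowd: each answer is correct with probability 0.9.
rng = random.Random(0)
truth = 1
ask = lambda: truth if rng.random() < 0.9 else 1 - truth

static_result = static_policy(ask)    # always costs 5 answers
dynamic_result = dynamic_policy(ask)  # here stops after 3 unanimous answers
print(static_result, dynamic_result)
```

With reliable workers the dynamic policy reaches the same answer at lower cost; when early answers disagree, it keeps spending budget on that task instead, which is the intuition behind quality- and budget-aware allocation.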
Pages: 33-54
Page count: 22