Towards a Classification Model for Tasks in Crowdsourcing

Cited by: 2
Authors
Alabduljabbar, Reham [1 ]
Al-Dossari, Hmood [2 ]
Affiliations
[1] King Saud Univ, Coll Comp & Informat Sci, Informat Technol Dept, Riyadh, Saudi Arabia
[2] King Saud Univ, Coll Comp & Informat Sci, Informat Syst Dept, Riyadh, Saudi Arabia
Source
PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON INTERNET OF THINGS, DATA AND CLOUD COMPUTING (ICC 2017) | 2017
Keywords
Crowdsourcing; Classification; Task; Amazon MTurk; Quality Control; SYSTEMS; MANAGEMENT; ISSUES;
DOI
10.1145/3018896.3018916
Chinese Library Classification (CLC) number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Crowdsourcing is an increasingly popular approach for harnessing the power of the crowd to perform tasks that machines cannot solve sufficiently well. Text annotation and image labeling are two examples of crowdsourcing tasks that are difficult to automate and for which human knowledge is often required. However, the quality of the outcomes obtained from crowdsourcing remains problematic. To obtain high-quality results, different quality control mechanisms should be applied to evaluate different types of tasks. In previous work, we presented a task ontology-based model that can be used to identify which quality mechanism is most appropriate for a given task type. In this paper, we complement that work by providing a categorization of crowdsourcing tasks; that is, we define the most common task types in the crowdsourcing context. We then show how machine learning algorithms can be used to automatically infer the type of a crowdsourced task.
Pages: 7
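
Note: the record above contains no code. The following is a minimal, hypothetical sketch (Python, scikit-learn) of the kind of supervised pipeline the abstract describes, in which a text classifier infers a task's type from its description. The task categories and training strings below are invented for illustration and are not taken from the paper.

# Hypothetical sketch: infer a crowdsourcing task's type from its text.
# Not the authors' implementation; categories and examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled task descriptions (task text -> task type).
train_texts = [
    "Draw a bounding box around every car in the photo",
    "Tag each image with the objects it contains",
    "Mark the sentiment of this tweet as positive or negative",
    "Highlight all person names in the news article",
    "Translate this sentence from French to English",
    "Rewrite the paragraph in formal English",
]
train_labels = [
    "image labeling", "image labeling",
    "text annotation", "text annotation",
    "translation", "translation",
]

# TF-IDF features over word unigrams and bigrams, then a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Predict the type of a new, unseen task description.
print(model.predict(["Label each picture as cat, dog, or neither"])[0])

In the paper's setting, the training labels would come from the task categorization the authors define, and the input text from the title and instructions of tasks posted on a platform such as Amazon MTurk.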