Towards a Classification Model for Tasks in Crowdsourcing

Cited by: 2
Authors
Alabduljabbar, Reham [1 ]
Al-Dossari, Hmood [2 ]
Affiliations
[1] King Saud Univ, Coll Comp & Informat Sci, Informat Technol Dept, Riyadh, Saudi Arabia
[2] King Saud Univ, Coll Comp & Informat Sci, Informat Syst Dept, Riyadh, Saudi Arabia
Source
PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON INTERNET OF THINGS, DATA AND CLOUD COMPUTING (ICC 2017) | 2017
Keywords
Crowdsourcing; Classification; Task; Amazon MTurk; Quality Control; SYSTEMS; MANAGEMENT; ISSUES;
DOI
10.1145/3018896.3018916
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Crowdsourcing is an increasingly popular approach for harnessing the power of the crowd to perform tasks that cannot be solved satisfactorily by machines. Text annotation and image labeling are two examples of crowdsourcing tasks that are difficult to automate and for which human knowledge is often required. However, the quality of the outcomes obtained from crowdsourcing remains problematic. To obtain high-quality results, different quality control mechanisms should be applied to evaluate different types of tasks. In previous work, we presented a task ontology-based model that can be used to identify which quality control mechanism is most appropriate for a given task type. In this paper, we complement that work by providing a categorization of crowdsourcing tasks; that is, we define the most common task types in the crowdsourcing context. We then show how machine learning algorithms can be used to automatically infer the type of a crowdsourced task.
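The record itself contains no code, but to illustrate the final claim of the abstract, the following is a minimal sketch of how task-type inference could be set up as a text-classification problem over task descriptions. The task-type labels, example descriptions, and the scikit-learn pipeline (TF-IDF features plus logistic regression) are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): infer a crowdsourced
# task's type from its title/description using a standard text classifier.
# The task types and training snippets below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: (task description, task type).
train = [
    ("Draw bounding boxes around cars in each photo", "image labeling"),
    ("Tag the sentiment of these product reviews", "text annotation"),
    ("Transcribe the attached two-minute audio clip", "transcription"),
    ("Answer a short survey about your shopping habits", "survey"),
    ("Label whether each image contains a cat or a dog", "image labeling"),
    ("Highlight named entities in the news articles", "text annotation"),
]
texts, labels = zip(*train)

# TF-IDF features over the descriptions feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Predict the type of an unseen task, which could then be used to pick
# an appropriate quality control mechanism for that task type.
print(model.predict(["Mark all pedestrians in the street images"]))
# e.g. ['image labeling']
```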
Pages: 7
Related Papers
50 records in total
  • [31] Managing the Crowd: Towards a Taxonomy of Crowdsourcing Processes
    Geiger, David
    Seedorf, Stefan
    Schulze, Thimo
    Nickerson, Robert
    Schader, Martin
    AMCIS 2011 PROCEEDINGS, 2011,
  • [32] Human-centred design on crowdsourcing annotation towards improving active learning model performance
    Dong, Jing
    Kang, Yangyang
    Liu, Jiawei
    Sun, Changlong
    Fan, Shu
    Jin, Huchong
    Wu, Dan
    Jiang, Zhuoren
    Niu, Xi
    Liu, Xiaozhong
    JOURNAL OF INFORMATION SCIENCE, 2023,
  • [33] Debiased Label Aggregation for Subjective Crowdsourcing Tasks
    Wallace, Shaun
    Cai, Tianyuan
    Le, Brendan
    Leiva, Luis A.
    EXTENDED ABSTRACTS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2022, 2022,
  • [34] Multistep planning for crowdsourcing complex consensus tasks
    Deng, Zixuan
    Xiang, Yanping
    KNOWLEDGE-BASED SYSTEMS, 2021, 231
  • [35] Crowdsourcing as a preprocessing for complex semantic annotation tasks
    Martinez Alonso, Hector
    Romeo, Lauren
    LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2014,
  • [36] Trends on Crowdsourcing JavaScript Small Tasks
    Zozas, Ioannis
    Anagnostou, Iason
    Bibi, Stamatia
    ENASE: PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON EVALUATION OF NOVEL APPROACHES TO SOFTWARE ENGINEERING, 2022, : 85 - 94
  • [37] Sustainable Employment in India by Crowdsourcing Enterprise Tasks
    Roy, Shourya
    Balamurugan, Chithralekha
    Gujar, Sujit
    PROCEEDINGS OF THE 3RD ACM SYMPOSIUM ON COMPUTING FOR DEVELOPMENT (ACM DEV 2013), 2013,
  • [38] SmartCrowd: A Workflow Framework for Complex Crowdsourcing Tasks
    Xiong, Tianhong
    Yu, Yang
    Pan, Maolin
    Yang, Jing
    BUSINESS PROCESS MANAGEMENT WORKSHOPS, BPM 2018 INTERNATIONAL WORKSHOPS, 2019, 342 : 387 - 398
  • [39] Toward a Learning Science for Complex Crowdsourcing Tasks
    Doroudi, Shayan
    Kamar, Ece
    Brunskill, Emma
    Horvitz, Eric
    34TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2016, 2016, : 2623 - 2634
  • [40] Use of crowdsourcing in evaluating post-classification accuracy
    Saralioglu, Ekrem
    Gungor, Oguz
    EUROPEAN JOURNAL OF REMOTE SENSING, 2019, 52 (sup1) : 137 - 147