CrowdCog: A Cognitive Skill based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing

Cited by: 17
Authors
Hettiachchi D. [1 ]
Van Berkel N. [2 ]
Kostakos V. [1 ]
Goncalves J. [1 ]
Affiliations
[1] University of Melbourne, Melbourne, VIC
[2] Aalborg University, Aalborg
Keywords
cognitive abilities; crowdsourcing; dynamic task assignment
DOI
10.1145/3415181
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Number
0812
Abstract
While crowd workers typically complete a variety of tasks on crowdsourcing platforms, there is no widely accepted method to successfully match workers to different types of tasks. Researchers have considered using worker demographics, behavioural traces, and prior task completion records to optimise task assignment. However, optimum task assignment remains a challenging research problem due to the limitations of proposed approaches, which in turn can have a significant impact on the future of crowdsourcing. We present 'CrowdCog', an online dynamic system that performs both task assignment and task recommendation by relying on fast-paced online cognitive tests to estimate worker performance across a variety of tasks. Our work extends prior work that highlights the effect of workers' cognitive ability on crowdsourcing task performance. Our study, deployed on Amazon Mechanical Turk, involved 574 workers and 983 HITs spanning four typical crowd tasks (Classification, Counting, Transcription, and Sentiment Analysis). Our results show that both our assignment method and our recommendation method yield a significant performance increase (5% to 20%) compared to generic or random task assignment. Our findings pave the way for the use of quick cognitive tests to provide robust recommendations and assignments to crowd workers. © 2020 ACM.
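The abstract does not describe CrowdCog's scoring model in detail, but the core assignment idea (routing each worker to the task type where short cognitive test scores predict the best performance) can be illustrated with a minimal Python sketch. The test names (attention, working_memory, inhibition) and all weights below are hypothetical placeholders for illustration, not the paper's fitted model.

# Minimal sketch of cognitive-score-based task routing, under assumed
# test names and weights. The real system would fit per-task predictors
# from observed worker accuracy; these coefficients are placeholders.

TASK_TYPES = ["classification", "counting", "transcription", "sentiment_analysis"]

# Illustrative per-task weights over cognitive test scores (all hypothetical).
WEIGHTS = {
    "classification":     {"attention": 0.5, "working_memory": 0.3, "inhibition": 0.2},
    "counting":           {"attention": 0.7, "working_memory": 0.2, "inhibition": 0.1},
    "transcription":      {"attention": 0.3, "working_memory": 0.5, "inhibition": 0.2},
    "sentiment_analysis": {"attention": 0.2, "working_memory": 0.3, "inhibition": 0.5},
}

def predict_performance(scores, task):
    """Weighted sum of a worker's cognitive test scores for one task type."""
    return sum(w * scores.get(test, 0.0) for test, w in WEIGHTS[task].items())

def assign_task(scores):
    """Route the worker to the task type with the highest predicted performance."""
    return max(TASK_TYPES, key=lambda t: predict_performance(scores, t))

# Example: a worker strong in working memory is routed to transcription here.
worker = {"attention": 0.4, "working_memory": 0.9, "inhibition": 0.5}
print(assign_task(worker))  # -> "transcription" under these placeholder weights

This sketch only shows the routing step; in a deployed system such as the one the abstract describes, the per-task predictors would be learned from completed HITs rather than hand-set.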