A server-assigned spatial crowdsourcing framework

Cited by: 93
Authors
To, Hien [1 ]
Shahabi, Cyrus [2 ]
Kazemi, Leyla [3 ]
Affiliations
[1] InfoLab, University of Southern California, 3710 S. McClintock Ave, RTH 323, Los Angeles, 90089, CA
[2] University of Southern California, 3737 Watt Way, PHE 306A, Los Angeles, 90089, CA
[3] Microsoft Corporation, 1 Microsoft Way, Redmond, 98052, WA
Funding
National Science Foundation (USA);
Keywords
Crowdsourcing; Mobile crowdsourcing; Participatory sensing; Spatial crowdsourcing; Spatial task assignment;
DOI
10.1145/2729713
CLC number
O1 [Mathematics];
Discipline code
0701 ; 070101 ;
Abstract
With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker's location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. However, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework. © 2015 ACM.
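The core assignment step described in the abstract can be modeled as maximum-cardinality bipartite matching between workers and the tasks they can reach. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: the eligibility rule (a fixed travel radius), the coordinates, and all function names are hypothetical, and a standard augmenting-path matching stands in for the flow-based solutions the paper develops.

```python
# Sketch: assign workers to nearby tasks, maximizing the number of
# assigned tasks via maximum-cardinality bipartite matching.
# All names, coordinates, and the radius rule are illustrative only.
import math

def eligible(worker, task, radius):
    """A worker may perform a task only if it lies within its travel radius."""
    return math.dist(worker, task) <= radius

def max_task_assignment(workers, tasks, radius):
    """Return {worker_index: task_index} maximizing the number of assignments."""
    # Build the worker -> reachable-tasks adjacency lists.
    adj = [[t for t, task in enumerate(tasks) if eligible(w, task, radius)]
           for w in workers]
    match_task = {}  # task index -> worker index currently assigned to it

    def augment(w, seen):
        # Try to assign worker w, re-routing earlier assignments if needed.
        for t in adj[w]:
            if t in seen:
                continue
            seen.add(t)
            if t not in match_task or augment(match_task[t], seen):
                match_task[t] = w
                return True
        return False

    for w in range(len(workers)):
        augment(w, set())
    return {w: t for t, w in match_task.items()}

workers = [(0.0, 0.0), (5.0, 5.0)]
tasks = [(0.5, 0.0), (0.0, 0.5), (5.0, 4.5)]
assignment = max_task_assignment(workers, tasks, radius=1.0)
# Both workers get one nearby task each; the third task is unreachable by worker 1.
```

The augmenting-path search is what distinguishes this from a greedy nearest-task rule: it can undo an earlier assignment when doing so frees a task for an otherwise unassignable worker, which is exactly why the global count of assigned tasks is maximized.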