Assessing quality of volunteer crowdsourcing contributions: lessons from the Cropland Capture game

Cited: 44
Authors
Salk, Carl F. [1 ,2 ]
Sturn, Tobias [1 ]
See, Linda [1 ]
Fritz, Steffen [1 ]
Perger, Christoph [1 ]
Affiliations
[1] Int Inst Appl Syst Anal, Ecosyst Serv & Management Program, A-2361 Laxenburg, Austria
[2] Swedish Univ Agr Sci, Southern Swedish Forest Res Ctr, Alnarp, Sweden
Funding
European Research Council;
关键词
crowdsourcing; volunteered geographic information; cropland; data quality; image classification; Geo-Wiki; GEOGRAPHIC INFORMATION; OPENSTREETMAP; ACCURACY;
DOI
10.1080/17538947.2015.1039609
Chinese Library Classification
P9 [Physical Geography];
Subject Classification Codes
0705; 070501;
Abstract
Volunteered geographic information (VGI) is the assembly of spatial information based on public input. While VGI has proliferated in recent years, assessing the quality of volunteer-contributed data has proven challenging, leading some to question the efficiency of such programs. In this paper, we compare several quality metrics for individual volunteers' contributions. The data were the product of the 'Cropland Capture' game, in which several thousand volunteers assessed 165,000 images for the presence of cropland over the course of 6 months. We compared agreement between volunteer ratings and an image's majority classification with volunteer self-agreement on repeated images and with expert evaluations. We also examined the impact of experience and learning on performance. Volunteer self-agreement was nearly always higher than agreement with majority classifications, and much greater than agreement with expert validations, although these metrics were all positively correlated. Volunteer quality showed a broad trend toward improvement with experience, but the highest accuracies were achieved by a handful of moderately active contributors, not the most active volunteers. Our results emphasize the importance of a universal set of expert-validated tasks as a gold standard for evaluating VGI quality.
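The quality metrics the abstract compares (a volunteer's agreement with each image's majority classification, and self-agreement on repeated images) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the data structures and function names are assumptions.

```python
from collections import Counter

def majority_label(ratings):
    """Majority classification of one image from all volunteers' ratings."""
    return Counter(ratings).most_common(1)[0][0]

def agreement_with_majority(volunteer_votes, all_votes):
    """Fraction of a volunteer's ratings matching each image's majority label.

    volunteer_votes: {image_id: label} for one volunteer
    all_votes:       {image_id: [labels from all volunteers]}
    """
    matches = [label == majority_label(all_votes[img])
               for img, label in volunteer_votes.items()]
    return sum(matches) / len(matches)

def self_agreement(repeat_votes):
    """Fraction of repeated-presentation pairs a volunteer rated consistently.

    repeat_votes: {image_id: [labels given on repeated presentations]}
    """
    consistent = total = 0
    for labels in repeat_votes.values():
        # compare every pair of ratings of the same repeated image
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                total += 1
                consistent += labels[i] == labels[j]
    return consistent / total
```

Agreement with expert validations would follow the same pattern as `agreement_with_majority`, with the expert's label replacing the crowd majority as the reference.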
Pages: 410-426
Page count: 17