Collusion-Proof Result Inference in Crowdsourcing

Cited: 13
Authors
Chen, Peng-Peng
Sun, Hai-Long [1 ]
Fang, Yi-Li [1 ]
Huai, Jin-Peng
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
crowdsourcing; quality control; collusion; collaborative crowdsourcing; result inference;
DOI
10.1007/s11390-018-1823-6
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
In traditional crowdsourcing, workers are expected to provide independent answers to tasks so as to ensure the diversity of answers. However, recent studies show that the crowd is not a collection of independent workers; instead, workers communicate and collaborate with each other. To pursue more rewards with little effort, some workers may collude to provide repeated answers, which damages the quality of the aggregated results. Nonetheless, few efforts have considered the negative impact of collusion on result inference in crowdsourcing. In this paper, we are specifically concerned with the collusion-proof result inference problem for general crowdsourcing tasks on public platforms. To that end, we design a metric, the worker performance change rate, to identify colluded answers by computing the difference of the mean worker performance before and after removing the repeated answers. We then incorporate the collusion detection result into existing result inference methods to guarantee the quality of the aggregated results even in the presence of collusion. With real-world and synthetic datasets, we conducted an extensive set of evaluations of our approach. The experimental results demonstrate the superiority of our approach in comparison with state-of-the-art methods.
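The metric described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual formulation: worker performance is approximated here by accuracy against a majority-vote estimate of the truth, and the collusion-detection step that produces the `suspected` set of (task, worker) pairs is taken as given.

```python
from collections import Counter

def majority_answer(answers):
    """Return the most frequent label among a task's answers (ties broken by insertion order)."""
    return Counter(answers).most_common(1)[0][0]

def mean_worker_accuracy(labels):
    """labels: dict task_id -> dict worker_id -> answer.
    Estimate each task's truth by majority vote, then average per-worker accuracy."""
    truth = {t: majority_answer(list(ws.values())) for t, ws in labels.items()}
    per_worker = {}
    for t, ws in labels.items():
        for w, a in ws.items():
            hits, total = per_worker.get(w, (0, 0))
            per_worker[w] = (hits + (a == truth[t]), total + 1)
    accs = [h / n for h, n in per_worker.values()]
    return sum(accs) / len(accs)

def performance_change_rate(labels, suspected):
    """Relative change in mean worker performance after removing suspected repeated answers.
    suspected: set of (task_id, worker_id) pairs flagged as colluded/repeated."""
    before = mean_worker_accuracy(labels)
    pruned = {t: {w: a for w, a in ws.items() if (t, w) not in suspected}
              for t, ws in labels.items()}
    pruned = {t: ws for t, ws in pruned.items() if ws}  # drop tasks left empty
    after = mean_worker_accuracy(pruned)
    return (after - before) / before
```

In this toy setup, a large positive change rate after removing a group's answers suggests those answers were dragging the crowd's estimated performance down, which is the signal the paper's metric exploits to flag collusion.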
Pages: 351-365
Page count: 15