Measuring the Quality of Annotations for a Subjective Crowdsourcing Task

Cited by: 3
Authors
Justo, Raquel [1 ]
Ines Torres, M. [1 ]
Alcaide, Jose M. [1 ]
Affiliation
[1] Univ Pais Vasco UPV EHU, Sarriena S-N, Leioa 48940, Spain
Source
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2017) | 2017 / Vol. 10255
Keywords
Supervised learning; Annotation; Crowdsourcing; Subjective language; AGREEMENT;
DOI
10.1007/978-3-319-58838-4_7
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, an algorithm for the detection of low-quality annotations is proposed. It is mainly focused on subjective annotation tasks carried out on crowdsourcing platforms. In this kind of task, where a correct response is not fixed in advance, several measures should be considered in order to capture the different annotator behaviours associated with poor-quality results: annotation time, inter-annotator agreement and repeated patterns in responses. The proposed algorithm considers all of these measures and provides a set of workers whose annotations should be removed. The experiments carried out on a sarcasm annotation task show that, once the low-quality annotations were removed and acquired again, a better labelled set was achieved.
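For illustration only, the sketch below shows one way the three signals named in the abstract (annotation time, agreement with a per-item majority label, and repeated response patterns) could be combined to flag suspect workers. The function name, thresholds and the combination rule are assumptions made for this sketch; the paper's actual algorithm and parameters are not reproduced here.

# Hypothetical sketch, not the authors' algorithm: flag crowdsourcing workers
# whose annotations look low quality by combining three signals mentioned in
# the abstract. Thresholds and the "two signals fire" rule are illustrative.
from collections import Counter, defaultdict
from statistics import median

def flag_low_quality_workers(annotations, min_time_s=2.0,
                             min_agreement=0.4, max_repeat_ratio=0.9):
    """annotations: list of dicts with keys 'worker', 'item', 'label',
    'time_s' (seconds the worker spent on the item)."""
    by_item = defaultdict(list)
    by_worker = defaultdict(list)
    for a in annotations:
        by_item[a['item']].append(a)
        by_worker[a['worker']].append(a)

    # Majority label per item, used as a proxy reference for agreement.
    majority = {item: Counter(a['label'] for a in anns).most_common(1)[0][0]
                for item, anns in by_item.items()}

    flagged = set()
    for worker, anns in by_worker.items():
        labels = [a['label'] for a in anns]

        # Signal 1: suspiciously fast responses (median time below a threshold).
        too_fast = median(a['time_s'] for a in anns) < min_time_s

        # Signal 2: low agreement with the per-item majority label.
        agreement = sum(a['label'] == majority[a['item']] for a in anns) / len(anns)
        low_agreement = agreement < min_agreement

        # Signal 3: repeated pattern, e.g. almost always the same label.
        repeat_ratio = Counter(labels).most_common(1)[0][1] / len(labels)
        repetitive = repeat_ratio > max_repeat_ratio

        # Illustrative combination rule: flag when at least two signals fire.
        if sum([too_fast, low_agreement, repetitive]) >= 2:
            flagged.add(worker)
    return flagged

A worker flagged this way would have their annotations discarded and the affected items re-annotated, as described in the abstract.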
Pages: 58-68
Number of pages: 11