Crowdsourcing Statement Classification to Enhance Information Quality Prediction

Cited by: 0
Authors
Singh, Jaspreet [1 ]
Soprano, Michael [2 ]
Roitero, Kevin [2 ]
Ceolin, Davide [3 ]
Affiliations
[1] Vrije Univ Amsterdam, Amsterdam, Netherlands
[2] Univ Udine, Udine, Italy
[3] CWI, Amsterdam, Netherlands
Source
DISINFORMATION IN OPEN ONLINE MEDIA, MISDOOM 2024 | 2024, Vol. 15175
Funding
Dutch Research Council;
Keywords
Crowdsourcing Annotation; Information Quality Assessment; Argument Type Identification; RELIABILITY; AGREEMENT;
DOI
10.1007/978-3-031-71210-4_5
Chinese Library Classification
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
This paper explores the use of crowdsourcing to classify statement types in film reviews in order to assess their information quality. Employing the Argument Type Identification Procedure, which uses the Periodic Table of Arguments to categorize arguments, the study aims to connect statement types to overall argument strength and information reliability. Focusing on non-expert annotators in a crowdsourcing environment, the research assesses their reliability based on factors including language proficiency and annotation experience. Results indicate the importance of careful annotator selection and training for achieving high inter-annotator agreement, and they highlight challenges in crowdsourcing statement classification for information quality assessment.
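As an illustration of the inter-annotator agreement mentioned in the abstract, the sketch below computes Cohen's kappa for two annotators labeling the same statements. The statement-type labels ("fact", "value", "policy") and the annotation data are hypothetical examples used only to show the calculation; they are not the authors' actual pipeline or dataset.

from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each
    annotator's marginal label distribution.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of the two marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical statement-type annotations for six review sentences.
annotator_1 = ["fact", "value", "value", "policy", "fact", "value"]
annotator_2 = ["fact", "value", "fact", "policy", "fact", "policy"]
print(f"Cohen's kappa: {cohen_kappa(annotator_1, annotator_2):.2f}")  # ~0.52

Values near 1 indicate near-perfect agreement, values near 0 indicate chance-level agreement; for more than two annotators, a multi-rater statistic such as Fleiss' kappa is the usual choice.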
Pages: 70 - 85
Number of pages: 16