"Garbage in, garbage out" revisited: What do machine learning application papers report about human-labeled training data?

Cited by: 47
Authors
Geiger, R. Stuart [1 ]
Cope, Dominique [2 ]
Ip, Jamie [2 ]
Lotosh, Marsha [3 ]
Shah, Aayush [2 ]
Weng, Jenny [2 ]
Tang, Rebekah [1 ]
Affiliations
[1] Univ Calif San Diego, San Diego, CA 92103 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
[3] Webster Pacific, San Francisco, CA USA
Source
QUANTITATIVE SCIENCE STUDIES | 2021, Vol. 2, Issue 3
Keywords
bias; data reporting; documentation; machine learning; research reporting; training data; INTERRATER RELIABILITY; ACADEMIC RESEARCH; SHARING RESEARCH; BIAS; STATEMENT; SENTIMENT;
DOI
10.1162/qss_a_00144
Chinese Library Classification (CLC)
G25 [Library science, library undertakings]; G35 [Information science, information work];
Discipline classification codes
1205; 120501;
Abstract
Supervised machine learning, in which models are automatically derived from labeled training data, is only as good as the quality of that data. This study builds on prior work that investigated to what extent "best practices" around labeling training data were followed in applied ML publications within a single domain (social media platforms). In this paper, we expand on that work by studying publications that apply supervised ML across a far broader spectrum of disciplines, focusing on human-labeled data. We report to what extent a random sample of ML application papers across disciplines gives specific details about whether best practices were followed, while acknowledging that a greater range of application fields necessarily produces greater diversity of labeling and annotation methods. Because much of machine learning research and education focuses only on what is done once a "ground truth" or "gold standard" of training data is available, it is especially relevant to discuss the equally important question of whether such data is reliable in the first place. This determination becomes increasingly complex across specialized fields, as labeling can range from a task requiring little-to-no background knowledge to one that must be performed by someone with career expertise.
Pages: 33