Tradeoffs in Accuracy and Efficiency in Supervised Learning Methods

Cited: 39
Authors
Collingwood, Loren [1 ]
Wilkerson, John [1 ]
Affiliations
[1] Univ Washington, Dept Polit Sci, Box 353530,101 Gowen Hall, Seattle, WA 98195 USA
Keywords
Machine learning; supervised learning; text classification
DOI
10.1080/19331681.2012.669191
CLC Number
G2 [Information and Knowledge Dissemination]
Discipline Code
05; 0503
Abstract
Words are an increasingly important source of data for social science research. Automated classification methodologies hold the promise of substantially lowering the cost of analyzing large amounts of text. In this article, we consider a number of questions of interest to prospective users of supervised learning methods, which automatically classify events according to a pre-existing classification system. Although information scientists devote considerable attention to assessing the performance of different supervised learning algorithms and feature representations, the questions they ask are often less directly relevant to the more practical concerns of social scientists. The first question prospective social science users are likely to ask is: How well do such methods work? The second is: How much human labeling effort is required? The third is: How do we assess whether virgin cases have been automatically classified with sufficient accuracy? We address these questions in the context of a particular dataset, the Congressional Bills Project, which includes more than 400,000 bill titles that humans have classified into 20 policy topics. This corpus offers an unusual opportunity to assess the performance of different algorithms, the impact of sample size, and the benefits of ensemble learning as a means of estimating classification accuracy.
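The abstract's third question — deciding which automatically classified cases are accurate enough to accept — is often answered with ensemble agreement: several classifiers label each case, and only cases where the models agree are accepted without human review. A minimal sketch of that filtering step is below; the policy-topic labels, the model names, and the agreement threshold are illustrative assumptions, not taken from the article.

```python
from collections import Counter

def ensemble_filter(predictions_per_model, min_agreement=3):
    """For each case, keep the plurality label only when at least
    `min_agreement` models assign that same label; otherwise return
    None to flag the case for human coding."""
    results = []
    for labels in zip(*predictions_per_model):
        label, count = Counter(labels).most_common(1)[0]
        results.append(label if count >= min_agreement else None)
    return results

# Hypothetical predictions from three models for five bill titles
svm    = ["health", "defense", "tax", "health", "education"]
nb     = ["health", "defense", "tax", "energy", "education"]
maxent = ["health", "defense", "energy", "energy", "labor"]

print(ensemble_filter([svm, nb, maxent]))
# → ['health', 'defense', None, None, None]
```

Loosening `min_agreement` trades human effort for accuracy: a threshold of 2 accepts every majority label in this example, while requiring unanimity sends three of the five cases back to human coders.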
Pages: 298-318
Page count: 21
Cited References
27 records in total
[1] Berger, A. L. (1996). Computational Linguistics, 22, 39.
[2] Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(4-5), 993-1022.
[3] Boser, B. E. (1992). Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, 144. DOI: 10.1145/130385.130401.
[4] Cardie, C., & Wilkerson, J. (2008). Text annotation for political science research. Journal of Information Technology & Politics, 5(1), 1-6.
[5] Carpenter, B. (2011). Text analysis with LingPipe, Vol. 4.
[6] Collingwood, L. (2010). RTEXTTOOLS CLASSIFIE.
[7] Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297.
[8] Dietterich, T. (1995). Overfitting and undercomputing in machine learning. ACM Computing Surveys, 27(3), 326-327.
[9] Dodds, P. S., & Danforth, C. M. (2010). Measuring the happiness of large-scale written expression: Songs, blogs, and presidents. Journal of Happiness Studies, 11(4), 441-456.
[10] Grimmer, J. (2009). Paper presented at the 2009 American Political Science Association meeting.