Convexity, classification, and risk bounds

Cited by: 779
Authors
Bartlett, PL [1]
Jordan, MI
McAuliffe, JD
Institutions
[1] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94720 USA
[2] Univ Calif Berkeley, Div Comp Sci, Berkeley, CA 94720 USA
Funding
National Science Foundation (USA);
Keywords
boosting; convex optimization; empirical process theory; machine learning; Rademacher complexity; support vector machine;
DOI
10.1198/016214505000000907
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208; 070103; 0714;
Abstract
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0-1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0-1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function, namely that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
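The variational transformation mentioned in the abstract can be sketched as follows; the notation below (R, R*, R_phi, H, H^-, psi) is introduced here for illustration and is not part of this record, so the paper should be consulted for the precise statement and conditions. For a nonnegative surrogate loss phi, H(eta) denotes the optimal conditional phi-risk and H^-(eta) the optimal conditional phi-risk restricted to predictions whose sign disagrees with the Bayes rule; psi is then the convex closure of the gap between the two, and it links excess 0-1 risk to excess surrogate risk.

\[
H(\eta) = \inf_{\alpha \in \mathbb{R}} \bigl( \eta\,\phi(\alpha) + (1-\eta)\,\phi(-\alpha) \bigr),
\qquad
H^{-}(\eta) = \inf_{\alpha:\ \alpha(2\eta-1) \le 0} \bigl( \eta\,\phi(\alpha) + (1-\eta)\,\phi(-\alpha) \bigr),
\]
\[
\tilde{\psi}(\theta) = H^{-}\!\Bigl(\tfrac{1+\theta}{2}\Bigr) - H\!\Bigl(\tfrac{1+\theta}{2}\Bigr),
\qquad
\psi = \text{convex closure of } \tilde{\psi} \text{ on } [-1,1],
\]
\[
\psi\bigl(R(f) - R^{*}\bigr) \;\le\; R_{\phi}(f) - R_{\phi}^{*}
\quad \text{for every measurable } f.
\]

For the hinge loss phi(alpha) = max(0, 1 - alpha), for instance, the transform works out to psi(theta) = |theta|, so the excess 0-1 risk is bounded directly by the excess hinge risk.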
Pages: 138-156
Page count: 19