PAC-Bayesian Bounds for Randomized Empirical Risk Minimizers

Cited by: 28
Authors
Alquier, P. [1,2]
Affiliations
[1] CREST, Paris, France
[2] Univ Paris 07, Lab Probab & Modeles Aleatoires, Paris, France
Keywords
regression estimation; classification; adaptive inference; statistical learning; randomized estimator; empirical risk minimization; empirical bound;
DOI
10.3103/S1066530708040017
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208; 070103; 0714;
Abstract
The aim of this paper is to generalize the PAC-Bayesian theorems proved by Catoni [6, 8] in the classification setting to more general problems of statistical inference. We show how to control the deviations of the risk of randomized estimators. Particular attention is paid to randomized estimators drawn from a small neighborhood of classical estimators, whose study leads to a control of the risk of the latter. These results allow us to bound the risk of very general estimation procedures, as well as to perform model selection.
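As a point of reference, a standard PAC-Bayesian inequality of the kind being generalized (a typical textbook form for i.i.d. data and a loss bounded in [0,1], not the paper's exact statement; constants and the logarithmic term vary across versions) reads: for a fixed prior \(\pi\) on the parameter space and any confidence level \(\varepsilon \in (0,1)\), with probability at least \(1-\varepsilon\), simultaneously for every posterior distribution \(\rho\),
\[
\mathbb{E}_{\theta\sim\rho}\bigl[R(\theta)\bigr]
\;\le\;
\mathbb{E}_{\theta\sim\rho}\bigl[r_n(\theta)\bigr]
\;+\;
\sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi)+\log\frac{2\sqrt{n}}{\varepsilon}}{2n}},
\]
where \(R\) is the theoretical risk, \(r_n\) the empirical risk on a sample of size \(n\), and \(\mathrm{KL}\) the Kullback-Leibler divergence. A randomized estimator drawn from a posterior \(\rho\) concentrated in a small neighborhood of an empirical risk minimizer then inherits this risk control, which is the mechanism the abstract alludes to.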
Pages: 279-304
Number of pages: 26
References (21 entries in total)
[1] Alquier, P. (2006). PhD thesis.
[2] Audibert, J.-Y. (2004). PhD thesis.
[3] Audibert, J.-Y. (2004). Aggregated estimators and empirical complexity for least square regression. Annales de l'Institut Henri Poincaré - Probabilités et Statistiques, 40(6), 685-736.
[4] Bernstein, S. (1927). Theory of Probability.
[5] Catoni, O. (2007). Lecture Notes-Monograph Series, Vol. 56.
[6] Catoni, O. (2003). PAC-Bayesian approach.
[7] Catoni, O. (2001). Lecture Notes in Mathematics.
[8] Cervonenkis, A. J. (1968). Doklady Akademii Nauk SSSR, 181.
[9] Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297.
[10] Devroye, L. (1996). Probabilistic Theory.