Bayesian variable selection for high dimensional generalized linear models: Convergence rates of the fitted densities

Cited by: 55
Author
Jiang, Wenxin [1 ]
Affiliation
[1] Northwestern Univ, Dept Stat, Evanston, IL 60208 USA
Keywords
convergence rates; generalized linear models; high dimensional data; posterior distribution; prior distribution; sparsity; variable selection;
DOI
10.1214/009053607000000019
Chinese Library Classification (CLC) codes
O21 [Probability theory and mathematical statistics]; C8 [Statistics];
Discipline classification codes
020208; 070103; 0714;
Abstract
Bayesian variable selection has recently gained much empirical success in a variety of applications where the number K of explanatory variables (x_1, ..., x_K) is possibly much larger than the sample size n. For generalized linear models, if most of the x_j's have very small effects on the response y, we show that it is possible to use Bayesian variable selection to reduce overfitting caused by the curse of dimensionality K >> n. In this approach a suitable prior can be used to choose a few out of the many x_j's to model y, so that the posterior will propose probability densities p that are "often close" to the true density p* in some sense. The closeness can be described by a Hellinger distance between p and p* that scales at a power very close to n^{-1/2}, which is the "finite-dimensional rate" corresponding to a low-dimensional situation. These findings extend some recent work of Jiang [Technical Report 05-02 (2005) Dept. Statistics, Northwestern Univ.] on consistency of Bayesian variable selection for binary classification.
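To make the setting concrete, the sketch below simulates a sparse logistic GLM with K >> n and runs a simple spike-and-slab variable-selection sampler over subsets of the x_j's. This is only an illustrative assumption, not the prior construction or the analysis of the paper: the inclusion probability PI_INC, slab scale TAU, birth-proposal scale SD_Q, and the reversible-jump Metropolis moves are hypothetical choices meant to show how a sparsity prior lets the posterior concentrate on a few relevant predictors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with K >> n: only the first s predictors truly affect y (logistic GLM).
n, K, s = 100, 300, 3
X = rng.standard_normal((n, K))
beta_true = np.zeros(K)
beta_true[:s] = 2.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

TAU, PI_INC, SD_Q = 1.0, 1.0 / K, 0.5   # illustrative slab sd, prior inclusion prob, birth-proposal sd

def log_lik(beta):
    """Bernoulli log-likelihood of a logistic GLM."""
    eta = X @ beta
    return np.sum(y * eta - np.logaddexp(0.0, eta))

def log_prior(beta, gamma):
    """Sparsity prior: each x_j is included with prob PI_INC; included betas get a N(0, TAU^2) slab."""
    k = int(gamma.sum())
    lp = k * np.log(PI_INC) + (K - k) * np.log(1.0 - PI_INC)
    lp -= 0.5 * np.sum((beta[gamma] / TAU) ** 2) + k * (np.log(TAU) + 0.5 * np.log(2 * np.pi))
    return lp

def log_q(b):
    """Log-density of the N(0, SD_Q^2) proposal used when a coefficient is newly added."""
    return -0.5 * (b / SD_Q) ** 2 - np.log(SD_Q) - 0.5 * np.log(2 * np.pi)

gamma, beta = np.zeros(K, dtype=bool), np.zeros(K)
log_post = log_lik(beta) + log_prior(beta, gamma)
incl = np.zeros(K)
n_iter, burn = 5000, 1000

for t in range(n_iter):
    # (a) birth/death move: flip one randomly chosen inclusion indicator
    j = rng.integers(K)
    g_new, b_new = gamma.copy(), beta.copy()
    if gamma[j]:                                  # death: drop x_j from the model
        g_new[j], b_new[j] = False, 0.0
        log_hastings = log_q(beta[j])             # reverse (birth) move would re-draw beta_j
    else:                                         # birth: add x_j with a freshly drawn coefficient
        g_new[j], b_new[j] = True, SD_Q * rng.standard_normal()
        log_hastings = -log_q(b_new[j])
    lp_new = log_lik(b_new) + log_prior(b_new, g_new)
    if np.log(rng.uniform()) < lp_new - log_post + log_hastings:
        gamma, beta, log_post = g_new, b_new, lp_new

    # (b) symmetric random-walk refresh of the coefficients currently in the model
    if gamma.any():
        b_new = beta.copy()
        b_new[gamma] += 0.1 * rng.standard_normal(int(gamma.sum()))
        lp_new = log_lik(b_new) + log_prior(b_new, gamma)
        if np.log(rng.uniform()) < lp_new - log_post:
            beta, log_post = b_new, lp_new

    if t >= burn:
        incl += gamma

post_incl = incl / (n_iter - burn)
print("top-5 posterior inclusion probabilities:", np.round(np.sort(post_incl)[::-1][:5], 2))
print("their indices:", np.argsort(post_incl)[::-1][:5])
```

The birth/death acceptance ratio includes the proposal density of the newly drawn coefficient so that moves between models of different dimension leave the target posterior invariant; the within-model random-walk step uses a symmetric proposal and needs no such correction.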
Pages: 1487-1511
Number of pages: 25