Bayesian inference with the L1-ball prior: solving combinatorial problems with exact zeros

Cited by: 4
Authors
Xu, Maoran [1 ]
Duan, Leo L. [2 ,3 ]
Affiliations
[1] Duke Univ, Dept Stat Sci, Durham, NC USA
[2] Univ Florida, Dept Stat, Gainesville, FL USA
[3] Univ Florida, Dept Stat, POB 118545, Gainesville, FL 32611 USA
Keywords
cardinality; data augmentation; reversified projection; soft-thresholding; VARIABLE SELECTION; DECOMPOSITION; SHRINKAGE; MIXTURES; MONOTONE; MODELS; NUMBER; LASSO; NORM;
DOI
10.1093/jrsssb/qkad076
Chinese Library Classification
O21 [Probability theory and mathematical statistics]; C8 [Statistics];
Discipline classification codes
020208; 070103; 0714;
Abstract
The ℓ1-regularisation is very popular in high-dimensional statistics: it converts the combinatorial problem of choosing which subset of the parameters is zero into a simple continuous optimisation. Its Bayesian counterparts, based on continuous priors concentrated near zero, succeed in quantifying the uncertainty in variable selection problems; nevertheless, the lack of exact zeros makes them difficult to use in broader problems such as change-point detection and rank selection. Inspired by the duality between the ℓ1-regularisation and a constraint onto an ℓ1-ball, we propose a new prior by projecting a continuous distribution onto the ℓ1-ball. This creates positive probability on the boundary of the ball, which contains both continuous elements and exact zeros. Unlike the spike-and-slab prior, this ℓ1-ball projection is continuous and almost surely differentiable, making posterior estimation amenable to the Hamiltonian Monte Carlo algorithm. We examine properties such as the volume change due to the projection, the connection to the combinatorial prior, and the minimax concentration rate in the linear problem. We demonstrate the usefulness of exact zeros in simplifying combinatorial problems, such as change-point detection in time series, dimension selection in mixture models, and low-rank-plus-sparse change detection in medical images.
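The key mechanism the abstract describes, projecting a continuous vector onto an ℓ1-ball so that some coordinates become exactly zero, can be illustrated with the standard sorting-based Euclidean projection. This is a sketch of the generic projection operator, not the authors' implementation; the function name `project_l1_ball` and the radius argument `r` are ours:

```python
import numpy as np

def project_l1_ball(beta, r):
    """Euclidean projection of a vector onto the l1-ball of radius r.

    Coordinates whose magnitude falls below a data-dependent threshold mu
    are set to exactly zero (soft-thresholding), which is how the
    projection creates exact zeros from a continuous input.
    """
    beta = np.asarray(beta, dtype=float)
    if np.abs(beta).sum() <= r:
        return beta.copy()            # already inside the ball: unchanged
    u = np.sort(np.abs(beta))[::-1]   # magnitudes in descending order
    css = np.cumsum(u)
    ks = np.arange(1, beta.size + 1)
    # largest k with u_k > (sum of top-k magnitudes - r) / k
    rho = ks[u > (css - r) / ks].max()
    mu = (css[rho - 1] - r) / rho     # soft-threshold level
    return np.sign(beta) * np.maximum(np.abs(beta) - mu, 0.0)

# For example, projecting (3, -1, 0.5) onto the ball of radius 2
# yields (2, 0, 0): the boundary point has two exact zeros.
print(project_l1_ball([3.0, -1.0, 0.5], 2.0))
```

Because the map is continuous and differentiable almost everywhere in the input, gradient-based samplers such as Hamiltonian Monte Carlo can be run on the unprojected vector.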
Pages: 1538-1560
Page count: 23