Sparse linear mixed model selection via streamlined variational Bayes

Cited by: 1
Authors
Degani, Emanuele [1 ]
Maestrini, Luca [2 ]
Toczydlowska, Dorota [3 ]
Wand, Matt P. [3 ]
Affiliations
[1] University of Padua, Department of Statistical Sciences, Padua, Italy
[2] Australian National University, Research School of Finance, Actuarial Studies and Statistics, Canberra, Australia
[3] University of Technology Sydney, School of Mathematical and Physical Sciences, Sydney, Australia
Source
ELECTRONIC JOURNAL OF STATISTICS | 2022, Vol. 16, No. 2
Funding
Australian Research Council
Keywords
Mean field variational Bayes; multilevel models; longitudinal data analysis; fixed effects selection; global-local shrinkage priors; GENOME-WIDE ASSOCIATION; VARIABLE SELECTION; PRIOR DISTRIBUTIONS; EMPIRICAL BAYES; ADAPTIVE LASSO; REGRESSION; SHRINKAGE; HORSESHOE; INFERENCE; ESTIMATOR;
DOI
10.1214/22-EJS2063
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208; 070103; 0714;
Abstract
Linear mixed models are a versatile statistical tool for studying data by accounting for fixed effects and random effects from multiple sources of variability. In many situations, a large number of candidate fixed effects is available and it is of interest to select a parsimonious subset of those that are effectively relevant for predicting the response variable. Variational approximations facilitate fast approximate Bayesian inference for the parameters of a variety of statistical models, including linear mixed models. However, for models with a high number of fixed or random effects, simple application of standard variational inference principles does not lead to fast approximate inference algorithms, due to the size of the model design matrices and the inefficient treatment of sparse matrix problems arising from the required approximating density parameter updates. We illustrate how recently developed streamlined variational inference procedures can be generalized to perform fast and accurate inference for the parameters of linear mixed models with nested random effects and global-local priors for Bayesian fixed effects selection. Our variational inference algorithms converge to the same optima as their standard implementations, although with significantly lower computational effort, memory usage and time, especially for large numbers of random effects. Using simulated and real data examples, we assess the quality of automated procedures for fixed effects selection that are free from hyperparameter tuning and rely only upon variational posterior approximations. Moreover, we show the high accuracy of variational approximations against model fitting via Markov chain Monte Carlo sampling.
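The mean field variational Bayes machinery referred to in the abstract can be illustrated in miniature with a coordinate-ascent sketch for a plain Bayesian linear regression with a Gaussian prior on the coefficients and an inverse-gamma prior on the noise variance. This is a generic textbook scheme, not the paper's streamlined algorithm (no random effects, no global-local shrinkage priors); all variable names and prior settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = X @ beta_true + Gaussian noise
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Priors: beta ~ N(0, sigma_beta2 * I), sigma^2 ~ Inverse-Gamma(a0, b0)
sigma_beta2 = 100.0
a0, b0 = 0.01, 0.01

# Mean-field factorization q(beta, sigma^2) = q(beta) q(sigma^2) with
# q(beta) = N(mu, Sigma) and q(sigma^2) = Inverse-Gamma(a_q, b_q).
XtX, Xty = X.T @ X, X.T @ y
a_q = a0 + 0.5 * n            # shape parameter update is fixed across iterations
E_inv_sigma2 = 1.0            # initial guess for E_q[1 / sigma^2]
for _ in range(50):
    # Update q(beta) given the current E_q[1 / sigma^2]
    Sigma = np.linalg.inv(E_inv_sigma2 * XtX + np.eye(p) / sigma_beta2)
    mu = E_inv_sigma2 * Sigma @ Xty
    # Update q(sigma^2): E_q ||y - X beta||^2 = residual SS + tr(X'X Sigma)
    expected_rss = np.sum((y - X @ mu) ** 2) + np.trace(XtX @ Sigma)
    b_q = b0 + 0.5 * expected_rss
    E_inv_sigma2 = a_q / b_q

print(mu)                      # approximate posterior mean of beta, near beta_true
print(b_q / (a_q - 1))         # approximate posterior mean of sigma^2
```

Each pass updates one factor while holding the other fixed, which monotonically increases the evidence lower bound; the streamlined procedures of the paper address the case where the analogous `Sigma` is large and sparsely structured because of nested random effects.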
Pages: 5182 - 5225
Number of pages: 44
Related Papers
50 records total
  • [1] Online model selection based on the variational bayes
    Sato, M
    NEURAL COMPUTATION, 2001, 13 (07) : 1649 - 1681
  • [2] Variational Bayes for High-Dimensional Linear Regression With Sparse Priors
    Ray, Kolyan
    Szabo, Botond
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2022, 117 (539) : 1270 - 1281
  • [3] Streamlined Variational Inference for Linear Mixed Models with Crossed Random Effects
    Menictas, Marianne
    Di Credico, Gioia
    Wand, Matt P.
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2023, 32 (01) : 99 - 115
  • [4] Bayesian adaptive lasso with variational Bayes for variable selection in high-dimensional generalized linear mixed models
    Dao Thanh Tung
    Minh-Ngoc Tran
    Tran Manh Cuong
    COMMUNICATIONS IN STATISTICS-SIMULATION AND COMPUTATION, 2019, 48 (02) : 530 - 543
  • [5] An Approximated Collapsed Variational Bayes Approach to Variable Selection in Linear Regression
    You, Chong
    Ormerod, John T.
    Li, Xiangyang
    Pang, Cheng Heng
    Zhou, Xiao-Hua
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2023, 32 (03) : 782 - 792
  • [6] SPARSE BAYESIAN LEARNING VIA VARIATIONAL BAYES FUSED WITH ORTHOGONAL MATCHING PURSUIT
    Shekaramiz, Mohammad
    Moon, Todd K.
    2022 INTERMOUNTAIN ENGINEERING, TECHNOLOGY AND COMPUTING (IETC), 2022,
  • [7] Sparse probit linear mixed model
    Mandt, Stephan
    Wenzel, Florian
    Nakajima, Shinichi
    Cunningham, John
    Lippert, Christoph
    Kloft, Marius
    MACHINE LEARNING, 2017, 106 (9-10) : 1621 - 1642
  • [8] Consistency of variational Bayes inference for estimation and model selection in mixtures
    Cherief-Abdellatif, Badr-Eddine
    Alquier, Pierre
    ELECTRONIC JOURNAL OF STATISTICS, 2018, 12 (02) : 2995 - 3035
  • [9] Hierarchical model selection for NGnet based on variational Bayes inference
    Yoshimoto, J
    Ishii, S
    Sato, M
    ARTIFICIAL NEURAL NETWORKS - ICANN 2002, 2002, 2415 : 661 - 666