Scalable Bayesian p-generalized probit and logistic regression

Authors
Ding, Zeyu [1 ,2 ]
Omlor, Simon [1 ,2 ]
Ickstadt, Katja [1 ,2 ]
Munteanu, Alexander [1 ]
Affiliations
[1] TU Dortmund Univ, Fac Stat, D-44227 Dortmund, Germany
[2] Lamarr Inst Machine Learning & Artificial Intellig, D-44227 Dortmund, Germany
Keywords
Generalized linear model; Bayesian regression; Coreset; Probit regression; Logistic regression; Binary regression
DOI
10.1007/s11634-024-00599-1
CLC Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification
020208; 070103; 0714
Abstract
The logit and probit link functions are arguably the two most common choices for binary regression models. Many studies have extended the choice of link functions to avoid possible misspecification and to improve the model fit to the data. We introduce the p-generalized Gaussian distribution (p-GGD) to binary regression in a Bayesian framework. The p-GGD has received considerable attention due to its flexibility in modeling the tails, while generalizing over, for instance, the standard normal distribution, where p=2, and the Laplace distribution, where p=1. Here, we extend from maximum likelihood estimation (MLE) to Bayesian posterior estimation, using Markov chain Monte Carlo (MCMC) sampling for the model parameters beta and the link function parameter p. We use simulated and real-world data to verify the effect of different parameters p on the estimation results, and to show how logistic regression and probit regression can be incorporated into a broader framework. To make our Bayesian methods scalable in the case of large data, we also incorporate coresets to reduce the data before running the complex and time-consuming MCMC analysis. This allows us to perform very efficient calculations while retaining the original posterior parameter distributions up to small distortions, both in practice and with theoretical guarantees.
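As a minimal illustration of the model family described in the abstract, the sketch below defines the p-GGD CDF as a binary-regression link and the resulting likelihood. It assumes the common parameterization with density proportional to exp(-|x|^p / p), so that p=2 recovers the probit link and p=1 the Laplace link; the function names (`pggd_cdf`, `neg_log_lik`) are hypothetical and this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import gennorm, norm

def pggd_cdf(x, p):
    # CDF of the p-GGD with density proportional to exp(-|x|**p / p).
    # scipy's gennorm uses exp(-|x|**p), so scale = p**(1/p) converts
    # between the two parameterizations; p = 2 then gives the standard
    # normal CDF (probit), p = 1 the standard Laplace CDF.
    return gennorm.cdf(x, p, scale=p ** (1.0 / p))

def neg_log_lik(beta, X, y, p):
    # Negative log-likelihood of binary regression with the p-GGD link:
    # P(y_i = 1 | x_i) = F_p(x_i @ beta), clipped for numerical safety.
    eta = X @ beta
    prob = np.clip(pggd_cdf(eta, p), 1e-12, 1.0 - 1e-12)
    return -np.sum(y * np.log(prob) + (1.0 - y) * np.log(1.0 - prob))
```

From here, `neg_log_lik` could be minimized for an MLE of beta at fixed p, or used as the likelihood term inside a Metropolis-Hastings step that samples beta and p jointly, as the abstract describes.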
Pages: 35