A Deep Learning Method for Comparing Bayesian Hierarchical Models

Cited by: 1
Authors
Elsemueller, Lasse [1 ]
Schnuerch, Martin [2 ]
Buerkner, Paul-Christian [3 ]
Radev, Stefan T. [4 ]
Affiliations
[1] Heidelberg Univ, Inst Psychol, Hauptstr 47, D-69117 Heidelberg, Germany
[2] Univ Mannheim, Dept Psychol, Mannheim, Germany
[3] TU Dortmund Univ, Dept Stat, Dortmund, Germany
[4] Heidelberg Univ, Cluster Excellence STRUCTURES, Heidelberg, Germany
Keywords
Bayesian statistics; model comparison; hierarchical modeling; deep learning; cognitive modeling; PROCESSING TREE MODELS; MONTE-CARLO; NORMALIZING CONSTANTS; CHOICE;
DOI
10.1037/met0000645
Chinese Library Classification
B84 [Psychology];
Discipline Classification Code
04 ; 0402 ;
Abstract
Bayesian model comparison (BMC) offers a principled approach to assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions. However, BMC is often intractable for the popular class of hierarchical models due to their high-dimensional nested parameter structure. To address this intractability, we propose a deep learning method for performing BMC on any set of hierarchical models which can be instantiated as probabilistic programs. Since our method enables amortized inference, it allows efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application. In a series of extensive validation studies, we benchmark the performance of our method against the state-of-the-art bridge sampling method and demonstrate excellent amortized inference across all BMC settings. We then showcase our method by comparing four hierarchical evidence accumulation models that have previously been deemed intractable for BMC due to partly implicit likelihoods. Additionally, we demonstrate how transfer learning can be leveraged to enhance training efficiency. We provide reproducible code for all analyses and an open-source implementation of our method.
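The core idea summarized in the abstract — training a neural classifier on datasets simulated from each candidate model, so that posterior model probabilities for new data are obtained in a single amortized forward pass — can be illustrated with a deliberately minimal sketch. This is not the authors' method or their open-source implementation: the two toy hierarchical models, the hand-picked summary statistics, and the logistic-regression "network" below are all illustrative assumptions standing in for the paper's simulators and deep architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: two hierarchical normal models that differ only
# in their group-level spread tau (an assumption for illustration).
def simulate(model, n_groups=5, n_obs=20):
    tau = 0.5 if model == 0 else 2.0          # group-level SD differs by model
    mu = rng.normal(0.0, tau, size=n_groups)  # group means (hierarchical level)
    x = rng.normal(mu[:, None], 1.0, size=(n_groups, n_obs))  # observations
    # Hand-picked summary statistics stand in for a learned summary network.
    return np.array([x.mean(), x.std(), x.mean(axis=1).std()])

# Training set: label each simulated dataset with its generating model index.
X, y = [], []
for m in (0, 1):
    for _ in range(2000):
        X.append(simulate(m))
        y.append(m)
X, y = np.array(X), np.array(y)

# A tiny logistic-regression "classifier" fit by gradient descent; its
# sigmoid output approximates the posterior probability of model 1.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

# Amortized inference: scoring a new dataset is one cheap forward pass,
# so re-estimation and validation on fresh simulations cost almost nothing.
new_data = simulate(1)
p_m1 = 1.0 / (1.0 + np.exp(-(new_data @ w + b)))
print(round(float(p_m1), 3))
```

The expensive part (simulation and training) happens once up front; afterwards, posterior model probabilities for any number of datasets — including the calibration checks the abstract describes as "fast performance validation" — require only forward passes through the trained classifier.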
Pages: 30
References
110 references in total (entries [61]-[70] shown)
  • [61] Marin J.-M., 2018, Likelihood-free model choice
  • [62] Markov chain Monte Carlo without likelihoods
    Marjoram, P
    Molitor, J
    Plagnol, V
    Tavaré, S
    [J]. PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2003, 100 (26) : 15324 - 15328
  • [63] McElreath R, 2016, TEXT STAT SCI, pXI
  • [64] Meng XL, 1996, STAT SINICA, V6, P831
  • [65] Warp bridge sampling
    Meng, XL
    Schilling, S
    [J]. JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2002, 11 (03) : 552 - 586
  • [66] ABrox - A user-friendly Python module for approximate Bayesian computation with a focus on model comparison
    Mertens, Ulf Kai
    Voss, Andreas
    Radev, Stefan
    [J]. PLOS ONE, 2018, 13 (03):
  • [67] Prepaid parameter estimation without likelihoods
    Mestdagh, Merijn
    Verdonck, Stijn
    Meers, Kristof
    Loossens, Tim
    Tuerlinckx, Francis
    [J]. PLOS COMPUTATIONAL BIOLOGY, 2019, 15 (09)
  • [68] Optimal Experimental Design for Model Discrimination
    Myung, Jay I.
    Pitt, Mark A.
    [J]. PSYCHOLOGICAL REVIEW, 2009, 116 (03) : 499 - 518
  • [69] Naeini MP, 2015, AAAI CONF ARTIF INTE, P2901
  • [70] Nicenboim B., 2022, An introduction to Bayesian data analysis for cognitive science