Accelerating Markov Chain Monte Carlo Simulation by Differential Evolution with Self-Adaptive Randomized Subspace Sampling

Cited by: 875
Authors
Vrugt, Jasper A. [1 ]
ter Braak, C. J. F. [2 ]
Diks, C. G. H. [3 ]
Robinson, Bruce A. [4 ]
Hyman, James M. [5 ]
Higdon, Dave [6 ]
Affiliations
[1] Los Alamos Natl Lab, Ctr Nonlinear Studies, Los Alamos, NM 87545 USA
[2] Univ Wageningen & Res Ctr, NL-6700 AC Wageningen, Netherlands
[3] Univ Amsterdam, Ctr Nonlinear Dynam Econ & Finance, Amsterdam, Netherlands
[4] Los Alamos Natl Lab, Civilian Nucl Program Off SPO CNP, Los Alamos, NM 87545 USA
[5] Los Alamos Natl Lab, Math Modeling & Anal Grp T7, Los Alamos, NM 87545 USA
[6] Los Alamos Natl Lab, Stat Sci CCS6, Los Alamos, NM 87545 USA
Keywords
MCMC; Markov chain Monte Carlo; RWM; random walk Metropolis; DE-MC; differential evolution Markov chain; DRAM; delayed rejection adaptive Metropolis; DREAM; differential evolution adaptive Metropolis; SCE-UA; shuffled complex evolution - University of Arizona; METROPOLIS ALGORITHM; OPTIMIZATION; CONVERGENCE; UNCERTAINTY; MIGRATION;
DOI
10.1515/IJNSNS.2009.10.3.273
CLC classification
T [Industrial Technology];
Subject classification code
08 ;
Abstract
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled Differential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multimodal search problems.
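The abstract describes the core proposal mechanism: several chains run in parallel, each chain proposes a jump along the difference of two other chains, and only a randomized subspace of the parameters is updated per step. The following is a minimal sketch of that idea, not the full DREAM algorithm of the paper (it omits the adaptive tuning of crossover probabilities and the outlier-chain correction); the chain count, crossover probability `cr`, and jitter scale are illustrative assumptions.

```python
import numpy as np

def dream_sketch(log_post, d=2, n_chains=8, n_iter=4000, cr=0.9, seed=0):
    """Minimal DREAM-style sampler: parallel chains, difference-vector
    proposals restricted to randomized subspaces, Metropolis acceptance."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_chains, d))            # initial population
    logp = np.array([log_post(xi) for xi in x])
    history = []
    for _ in range(n_iter):
        for i in range(n_chains):
            # Two distinct other chains supply the difference vector.
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                                size=2, replace=False)
            # Randomized subspace: each dimension updated with prob. cr.
            mask = rng.random(d) < cr
            if not mask.any():
                mask[rng.integers(d)] = True
            d_eff = int(mask.sum())
            gamma = 2.38 / np.sqrt(2 * d_eff)     # DE-MC jump scaling
            prop = x[i].copy()
            prop[mask] += gamma * (x[r1, mask] - x[r2, mask]) \
                          + 1e-6 * rng.normal(size=d_eff)  # small jitter
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:        # Metropolis rule
                x[i], logp[i] = prop, lp
        history.append(x.copy())
    # Discard the first half as burn-in and pool all chains.
    return np.concatenate(history[n_iter // 2:])
```

For example, sampling a standard bivariate Gaussian with `dream_sketch(lambda x: -0.5 * np.sum(x**2))` yields pooled draws whose mean and standard deviation approach 0 and 1.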
Pages: 273-290 (18 pages)