Iterative updating of model error for Bayesian inversion

Cited by: 40
Authors
Calvetti, Daniela [1 ]
Dunlop, Matthew [2 ]
Somersalo, Erkki [1 ]
Stuart, Andrew [2 ]
Affiliations
[1] Case Western Reserve Univ, Dept Math Appl Math & Stat, 10900 Euclid Ave, Cleveland, OH 44106 USA
[2] Calif Inst Technol Comp & Math Sci, 1200 E Calif Blvd, Pasadena, CA 91125 USA
Funding
Engineering and Physical Sciences Research Council (UK); National Science Foundation (US);
Keywords
model discrepancy; discretization error; particle approximation; importance sampling; electrical impedance tomography; Darcy flow; approximation errors; calibration; reduction
DOI
10.1088/1361-6420/aaa34d
CLC number
O29 [Applied Mathematics];
Subject classification code
070104
Abstract
In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Because the algorithm is not based on rejections, it requires only a limited number of full-model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data are finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large-particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
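The linear Gaussian case described in the abstract can be sketched in a few lines. Everything below is a hypothetical toy setup, not taken from the paper: `A` plays the role of the accurate forward map, `B` a cheap surrogate, and the prior, noise covariance, dimensions, and sample counts are illustrative choices. The loop alternates between (i) computing the Gaussian posterior under the surrogate with the noise covariance inflated by the current model-error covariance, and (ii) refitting the model-error mean and covariance from samples of that posterior pushed through the discrepancy `A - B`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear Gaussian setup (hypothetical dimensions and matrices).
n, d = 15, 8
A = rng.standard_normal((n, d))              # "accurate" forward map
B = A + 0.3 * rng.standard_normal((n, d))    # cheap surrogate model
Sigma = np.eye(d)                            # prior covariance (zero mean)
Gamma = 0.05 * np.eye(n)                     # observation noise covariance
u_true = rng.standard_normal(d)
y = A @ u_true + rng.multivariate_normal(np.zeros(n), Gamma)

# Iteratively update Gaussian model-error statistics (mean m, covariance C).
m, C = np.zeros(n), np.zeros((n, n))
for k in range(10):
    noise_cov = Gamma + C
    # Posterior for the surrogate model y = B u + m + e, e ~ N(0, Gamma + C).
    P = np.linalg.inv(B.T @ np.linalg.solve(noise_cov, B) + np.linalg.inv(Sigma))
    mu = P @ (B.T @ np.linalg.solve(noise_cov, y - m))
    # Monte Carlo refit of the model-error distribution under this posterior.
    U = rng.multivariate_normal(mu, P, size=2000)
    E = U @ (A - B).T                        # samples of (A - B) u
    m, C = E.mean(axis=0), np.cov(E, rowvar=False)

# Compare with the naive posterior mean that ignores the model error.
P0 = np.linalg.inv(B.T @ np.linalg.solve(Gamma, B) + np.linalg.inv(Sigma))
mu_naive = P0 @ (B.T @ np.linalg.solve(Gamma, y))
err_iter = np.linalg.norm(mu - u_true)
err_naive = np.linalg.norm(mu_naive - u_true)
print("error with model-error update:", err_iter)
print("error ignoring model error:   ", err_naive)
```

In the linear Gaussian, finite-dimensional setting the paper proves geometric convergence of this iteration; the sketch refits the error statistics by Monte Carlo rather than in closed form, which is how the particle approximation handles nonlinear models as well.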
Pages: 38
Related papers
32 items in total
[1] Agapiou S., Papaspiliopoulos O., Sanz-Alonso D., Stuart A. M. Importance sampling: intrinsic dimension and computational cost. Statistical Science, 2017, 32(3): 405-431.
[2] [Anonymous]. Handbook of Uncertainty Quantification.
[3] [Anonymous]. arXiv:1605.07811, 2016.
[4] [Anonymous]. arXiv:1512.00933, 2016.
[5] Arridge S. R., Kaipio J. P., Kolehmainen V., Schweiger M., Somersalo E., Tarvainen T., Vauhkonen M. Approximation errors and model reduction with an application in optical diffusion tomography. Inverse Problems, 2006, 22(1): 175-195.
[6] Banasiak R., Ye Z., Soleimani M. Improving three-dimensional electrical capacitance tomography imaging using approximation error model theory. Journal of Electromagnetic Waves and Applications, 2012, 26(2-3): 411-421.
[7] Bayarri M. J., Berger J. O., Paulo R., Sacks J., Cafeo J. A., Cavendish J., Lin C.-H., Tu J. A framework for validation of computer models. Technometrics, 2007, 49(2): 138-154.
[8] Beskos A., Jasra A., Muzaffer E. A., Stuart A. M. Sequential Monte Carlo methods for Bayesian elliptic inverse problems. Statistics and Computing, 2015, 25(4): 727-737.
[9] Brynjarsdottir J., O'Hagan A. Learning about physical parameters: the importance of model discrepancy. Inverse Problems, 2014, 30(11).
[10] Calvetti D. Introduction to Bayesian Scientific Computing. 2007.