Randomized physics-informed machine learning for uncertainty quantification in high-dimensional inverse problems

Cited: 0
Authors
Zong, Yifei [1 ]
Barajas-Solano, David [2 ]
Tartakovsky, Alexandre M. [1 ,2 ]
Affiliations
[1] Univ Illinois, Dept Civil & Environm Engn, Urbana, IL 61801 USA
[2] Pacific Northwest Natl Lab, Richland, WA 99352 USA
Funding
U.S. National Science Foundation;
Keywords
Bayesian inference; Hamiltonian Monte Carlo; Inverse problems; Uncertainty quantification; Dimension reduction; Physics-informed machine learning; MONTE-CARLO; FLOW; HYDROGEOLOGY; PARAMETERS; ENSEMBLE;
DOI
10.1016/j.jcp.2024.113395
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Code
081203 ; 0835 ;
Abstract
We propose the randomized physics-informed conditional Karhunen-Loève expansion (rPICKLE) method for uncertainty quantification in high-dimensional inverse problems. In rPICKLE, the states and parameters of the governing partial differential equation (PDE) are approximated via truncated conditional Karhunen-Loève expansions (cKLEs). Uncertainty in the inverse solution is quantified via the posterior distribution of the cKLE coefficients, formulated with independent standard normal priors and a likelihood containing PDE residuals evaluated over the computational domain. The maximum a posteriori (MAP) estimate of the cKLE coefficients is found by minimizing a loss function given (up to a constant) by the negative log posterior. The posterior is sampled by adding zero-mean Gaussian noise to the MAP loss function and minimizing the loss for different noise realizations. For linear and low-dimensional nonlinear problems, we show that the rPICKLE posterior converges to the true Bayesian posterior. For high-dimensional nonlinear problems, we obtain rPICKLE posterior approximations with high log-predictive probability. For a low-dimensional problem, the traditional Hamiltonian Monte Carlo (HMC) and Stein variational gradient descent (SVGD) methods yield posteriors similar to rPICKLE's. However, both HMC and SVGD fail for the high-dimensional problem. These results demonstrate the advantages of rPICKLE for approximately sampling high-dimensional posterior distributions.
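The sampling step described in the abstract (adding zero-mean Gaussian noise to the MAP loss and re-minimizing per noise realization) follows the randomize-then-optimize idea. The sketch below is not the authors' rPICKLE code: it illustrates the principle on a hypothetical toy linear-Gaussian problem, where perturbing both the data and the prior anchor and re-solving the MAP problem samples the Bayesian posterior exactly; all names (`A`, `theta_true`, `rto_sample`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy inverse problem standing in for the PDE residual:
# data d = A @ theta_true + noise, prior theta ~ N(0, I), noise ~ N(0, sigma^2 I).
n_param, n_obs, sigma = 3, 20, 0.1
A = rng.normal(size=(n_obs, n_param))
theta_true = rng.normal(size=n_param)
d = A @ theta_true + sigma * rng.normal(size=n_obs)

# Exact Bayesian posterior (available in closed form for this linear model).
precision = A.T @ A / sigma**2 + np.eye(n_param)
cov_post = np.linalg.inv(precision)
mean_post = cov_post @ (A.T @ d) / sigma**2

def rto_sample():
    """Randomize-then-optimize: perturb data and prior anchor, then minimize
    ||A t - (d + eps)||^2 / (2 sigma^2) + ||t - eta||^2 / 2.
    For a linear model the minimizer has a closed form."""
    eps = sigma * rng.normal(size=n_obs)  # zero-mean noise in the likelihood term
    eta = rng.normal(size=n_param)        # zero-mean noise in the prior term
    return cov_post @ (A.T @ (d + eps) / sigma**2 + eta)

samples = np.array([rto_sample() for _ in range(5000)])

# For linear-Gaussian problems the sample moments match the exact posterior.
print(np.abs(samples.mean(axis=0) - mean_post).max())
print(np.abs(np.cov(samples.T) - cov_post).max())
```

For nonlinear PDE residuals the per-realization minimization is done numerically rather than in closed form, and the resulting samples only approximate the posterior, which is the regime the abstract's convergence claims address.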
Pages: 21