Solving and learning nonlinear PDEs with Gaussian processes

Cited by: 65
Authors
Chen, Yifan [1 ]
Hosseini, Bamdad [1 ]
Owhadi, Houman [1 ]
Stuart, Andrew M. [1 ]
Affiliation
[1] California Institute of Technology, Computing and Mathematical Sciences, Pasadena, CA 91125, USA
Keywords
Kernel methods; Gaussian processes; Nonlinear partial differential equations; Inverse problems; Optimal recovery; INVERSE PROBLEMS; MODEL IDENTIFICATION; PARAMETER-ESTIMATION; NEURAL-NETWORKS; APPROXIMATION; ALGORITHM; FRAMEWORK;
DOI
10.1016/j.jcp.2021.110668
Chinese Library Classification (CLC) Number
TP39 [Computer Applications]
Discipline Classification Code
081203; 0835
Abstract
We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations (PDEs), and for solving inverse problems (IPs) involving the identification of parameters in PDEs, using the framework of Gaussian processes. The proposed approach: (1) provides a natural generalization of collocation kernel methods to nonlinear PDEs and IPs; (2) has guaranteed convergence for a very general class of PDEs, and comes equipped with a path to compute error bounds for specific PDE approximations; (3) inherits the state-of-the-art computational complexity of linear solvers for dense kernel matrices. The main idea of our method is to approximate the solution of a given PDE as the maximum a posteriori (MAP) estimator of a Gaussian process conditioned on solving the PDE at a finite number of collocation points. Although this optimization problem is infinite-dimensional, it can be reduced to a finite-dimensional one by introducing additional variables corresponding to the values of the derivatives of the solution at collocation points; this generalizes the representer theorem arising in Gaussian process regression. The reduced optimization problem has the form of a quadratic objective function subject to nonlinear constraints; it is solved with a variant of the Gauss-Newton method. The resulting algorithm (a) can be interpreted as solving successive linearizations of the nonlinear PDE, and (b) in practice is found to converge in a small number of iterations (2 to 10), for a wide range of PDEs. Most traditional approaches to IPs interleave parameter updates with numerical solution of the PDE; our algorithm solves for both parameter and PDE solution simultaneously. Experiments on nonlinear elliptic PDEs, Burgers' equation, a regularized Eikonal equation, and an IP for permeability identification in Darcy flow illustrate the efficacy and scope of our framework. (C) 2021 Elsevier Inc. All rights reserved.
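To make the abstract's description concrete, the following is a minimal, illustrative sketch of the GP-collocation-plus-Gauss-Newton idea on a 1D model problem -u''(x) + u(x)^3 = f(x) on (0, 1) with u(0) = u(1) = 0. It is not the authors' implementation: the Gaussian kernel, length-scale, nugget, collocation grid, manufactured solution, and all variable names are assumptions made for this example. The unknowns are the solution values z at interior collocation points; the PDE forces the corresponding second-derivative values to equal z^3 - f, the Gram matrix of the value and second-derivative functionals plays the role of the kernel matrix, and each Gauss-Newton step solves a linearized problem.

import numpy as np

# Model problem (assumed for illustration): -u''(x) + u(x)^3 = f(x) on (0, 1), u(0) = u(1) = 0,
# with f manufactured so that u*(x) = sin(pi x) is the exact solution.
sigma2 = 0.04   # squared length-scale of the Gaussian kernel (assumed)
M = 20          # number of interior collocation points (assumed)

def k(x, y):      # Gaussian kernel K(x, y)
    r = x[:, None] - y[None, :]
    return np.exp(-r**2 / (2 * sigma2))

def k_dd(x, y):   # second derivative of K in either argument (same formula for this kernel)
    r = x[:, None] - y[None, :]
    return (r**2 / sigma2**2 - 1 / sigma2) * np.exp(-r**2 / (2 * sigma2))

def k_dddd(x, y): # mixed fourth derivative d^4 K / dx^2 dy^2
    r = x[:, None] - y[None, :]
    return (3 / sigma2**2 - 6 * r**2 / sigma2**3 + r**4 / sigma2**4) * np.exp(-r**2 / (2 * sigma2))

x_int = np.linspace(0.0, 1.0, M + 2)[1:-1]           # interior collocation points
x_all = np.concatenate([x_int, [0.0, 1.0]])          # plus the two boundary points
f = np.pi**2 * np.sin(np.pi * x_int) + np.sin(np.pi * x_int)**3

# Gram matrix of the linear functionals [pointwise values at x_all, second derivatives at x_int].
Theta = np.block([[k(x_all, x_all),    k_dd(x_all, x_int)],
                  [k_dd(x_int, x_all), k_dddd(x_int, x_int)]])
Theta += 1e-10 * np.mean(np.diag(Theta)) * np.eye(len(Theta))   # small "nugget" for stability
L = np.linalg.cholesky(Theta)

def residual(z):
    # Observations implied by z: values at interior points, zero boundary values,
    # and second derivatives z^3 - f forced by the PDE; the objective is v^T Theta^{-1} v.
    v = np.concatenate([z, [0.0, 0.0], z**3 - f])
    return np.linalg.solve(L, v)

def jacobian(z):
    J = np.zeros((2 * M + 2, M))
    J[:M, :] = np.eye(M)                 # derivative of the value block with respect to z
    J[M + 2:, :] = np.diag(3 * z**2)     # derivative of the z^3 - f block with respect to z
    return np.linalg.solve(L, J)

# Gauss-Newton on the reduced finite-dimensional problem (each step is a linearized PDE solve).
z = np.zeros(M)
for it in range(10):
    r, J = residual(z), jacobian(z)
    step = np.linalg.lstsq(J, -r, rcond=None)[0]
    z = z + step
    if np.linalg.norm(step) < 1e-10:
        break

print("max error at collocation points:", np.abs(z - np.sin(np.pi * x_int)).max())

The least-squares solve for the Gauss-Newton step and the diagonal nugget added to the Gram matrix are standard stabilization choices for this sketch, not prescriptions from the paper; the abstract's reported behavior (convergence in roughly 2 to 10 iterations) refers to the authors' own experiments.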
Pages: 29
Related Papers
50 records in total
  • [21] Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions
    Boussange, Victor
    Becker, Sebastian
    Jentzen, Arnulf
    Kuckuck, Benno
    Pellissier, Loic
    PARTIAL DIFFERENTIAL EQUATIONS AND APPLICATIONS, 2023, 4 (06)
  • [22] Optimism in Active Learning with Gaussian Processes
    Collet, Timothe
    Pietquin, Olivier
    NEURAL INFORMATION PROCESSING, PT II, 2015, 9490 : 152 - 160
  • [23] Error analysis of kernel/GP methods for nonlinear and parametric PDEs
    Batlle, Pau
    Chen, Yifan
    Hosseini, Bamdad
    Owhadi, Houman
    Stuart, Andrew M.
    JOURNAL OF COMPUTATIONAL PHYSICS, 2024, 520
  • [24] Nonlinear response modelling of material systems using constrained Gaussian processes
    Herath, Sumudu
    Chakraborty, Souvik
    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, 2024, 125 (14)
  • [25] Nonlinear channel equalization with Gaussian processes for regression
    Perez-Cruz, Fernando
    Murillo-Fuentes, Juan Jose
    Caro, Sebastian
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2008, 56 (10) : 5283 - 5286
  • [26] Multi-View Representation Learning With Deep Gaussian Processes
    Sun, Shiliang
    Dong, Wenbo
    Liu, Qiuyang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (12) : 4453 - 4468
  • [27] Machine learning of linear differential equations using Gaussian processes
    Raissi, Maziar
    Perdikaris, Paris
    Karniadakis, George Em
    JOURNAL OF COMPUTATIONAL PHYSICS, 2017, 348 : 683 - 693
  • [28] Policy learning in continuous-time Markov decision processes using Gaussian Processes
    Bartocci, Ezio
    Bortolussi, Luca
    Brazdil, Tomas
    Milios, Dimitrios
    Sanguinetti, Guido
    PERFORMANCE EVALUATION, 2017, 116 : 84 - 100
  • [29] Fast approximate learning-based multistage nonlinear model predictive control using Gaussian processes and deep neural networks
    Bonzanini, Angelo D.
    Paulson, Joel A.
    Makrygiorgos, Georgios
    Mesbah, Ali
    COMPUTERS & CHEMICAL ENGINEERING, 2021, 145
  • [30] Mosaic flows: A transferable deep learning framework for solving PDEs on unseen domains
    Wang, Hengjie
    Planas, Robert
    Chandramowlishwaran, Aparna
    Bostanabad, Ramin
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2022, 389