Learning high-dimensional parametric maps via reduced basis adaptive residual networks

Cited by: 14
Authors
O'Leary-Roseberry, Thomas [1 ]
Du, Xiaosong [2 ]
Chaudhuri, Anirban [1 ]
Martins, Joaquim R. R. A. [3 ]
Willcox, Karen [1 ,4 ]
Ghattas, Omar [1 ,5 ]
Affiliations
[1] Univ Texas Austin, Oden Inst Computat Engn & Sci, 201 24th St, Austin, TX 78712 USA
[2] Missouri Univ Sci & Technol, Dept Mech & Aerosp Engn, 400 13th St, Rolla, MO 65409 USA
[3] Univ Michigan, Dept Aerosp Engn, 1320 Beal Ave, Ann Arbor, MI 48109 USA
[4] Univ Texas Austin, Dept Aerosp Engn & Engn Mech, 2617 Wichita St, C0600, Austin, TX 78712 USA
[5] Univ Texas Austin, Walker Dept Mech Engn, 204 Dean Keeton St, Austin, TX 78712 USA
Keywords
Deep learning; Neural networks; Parametrized PDEs; Control flows; Residual networks; Adaptive surrogate construction; APPROXIMATION; FOUNDATIONS; REDUCTION; FRAMEWORK;
DOI
10.1016/j.cma.2022.115730
Chinese Library Classification
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
We propose a scalable framework for learning high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs. When only limited training data are available, a compact parametrization is beneficial because it ameliorates the ill-posedness of the neural network training problem. By linearly restricting high-dimensional maps to informed reduced bases of the inputs, one can compress them in a constructive way that detects appropriate basis ranks and comes equipped with rigorous error estimates. The scalable neural network learning problem is then to learn the nonlinear mapping between the compressed reduced bases. Unlike the reduced basis construction, however, neural network constructions are not guaranteed to reduce errors as representation power is added, which makes good practical performance difficult to achieve. Inspired by recent approximation theory that connects ResNets to sequential minimizing flows, we present an adaptive ResNet construction algorithm. This algorithm enriches the neural network approximation depth-wise, achieving good practical performance by first training a shallow network and then adapting. We prove universal approximation of the associated neural network class for L^2_ν functions on compact sets. The overall framework provides constructive means to detect appropriate breadth and depth, and correspondingly compact parametrizations of neural networks, significantly reducing the need for architectural hyperparameter tuning. Numerical experiments for parametric PDE problems and a 3D CFD wing design optimization parametric map demonstrate that the proposed methodology achieves remarkably high accuracy for limited training data and outperforms the other neural network strategies we compared against. © 2022 Elsevier B.V. All rights reserved.
Pages: 29
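A minimal NumPy sketch of the two ingredients the abstract describes: constructive reduced-basis compression with rank detection from singular-value decay, and a residual network acting only between the reduced coordinates, built so that depth-wise enrichment appends near-identity blocks. This is not the authors' code; the synthetic low-rank map, the tolerance, the hidden width, and the near-zero initialization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional parametric map q = f(m): R^500 -> R^400.
# The inputs are given low-dimensional structure so that informative
# reduced bases exist (a stand-in for a PDE-governed map).
d_in, d_out, n_train, latent_dim = 500, 400, 200, 20
A = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
M = rng.standard_normal((n_train, latent_dim)) @ rng.standard_normal((latent_dim, d_in))
M += 0.01 * rng.standard_normal((n_train, d_in))  # small off-subspace noise
Q = np.tanh(M @ A.T)  # output snapshots, one row per training sample

def reduced_basis(X, tol=1e-2):
    """POD-style basis from snapshots (rows of X). The rank is chosen
    from the singular-value decay so the relative truncation-error
    estimate stays below tol, mimicking constructive rank detection."""
    U, s, _ = np.linalg.svd(X.T, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
    return U[:, :r], r

V_in, r_in = reduced_basis(M)    # input reduced basis
V_out, r_out = reduced_basis(Q)  # output reduced basis

# The learning problem is now only r_in -> r_out dimensional.
M_r, Q_r = M @ V_in, Q @ V_out

def resnet_forward(x, lift, layers):
    """Linear lift into the output-reduced space, then residual blocks.
    Depth-wise adaptation would append further (W1, b1, W2, b2) tuples
    initialized near zero, enriching the map without resetting it."""
    x = x @ lift
    for W1, b1, W2, b2 in layers:
        x = x + np.tanh(x @ W1 + b1) @ W2 + b2
    return x

h = 32  # hidden width of each residual block (arbitrary choice)
lift = 0.1 * rng.standard_normal((r_in, r_out))
layers = [(0.1 * rng.standard_normal((r_out, h)), np.zeros(h),
           np.zeros((h, r_out)), np.zeros(r_out))]  # near-identity block
pred_r = resnet_forward(M_r, lift, layers)
pred = pred_r @ V_out.T  # lift reduced predictions back to full space
```

Training the block weights (e.g. by least squares on `Q_r`) and then appending further near-identity blocks one at a time would follow the adaptive, shallow-to-deep construction the abstract outlines.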