Multi-fidelity surrogate modeling using long short-term memory networks

Cited by: 43
Authors
Conti, Paolo [1 ]
Guo, Mengwu [2 ]
Manzoni, Andrea [3 ]
Hesthaven, Jan S. [4 ]
Affiliations
[1] Politecn Milan, Dept Civil Engn, Milan, Italy
[2] Univ Twente, Dept Appl Math, Enschede, Netherlands
[3] Politecn Milan, Dept Math, MOX, Milan, Italy
[4] Ecole Polytech Fed Lausanne, Inst Math, Lausanne, Switzerland
Keywords
Machine learning; Multi-fidelity regression; LSTM network; Parametrized PDE; Time-dependent problem; APPROXIMATION;
DOI
10.1016/j.cma.2022.115811
Chinese Library Classification
T [Industrial Technology];
Discipline code
08 ;
Abstract
When evaluating quantities of interest that depend on the solutions to differential equations, we inevitably face a trade-off between accuracy and efficiency. Especially for parametrized, time-dependent problems in engineering computations, it is often the case that acceptable computational budgets limit the availability of high-fidelity, accurate simulation data. Multi-fidelity surrogate modeling has emerged as an effective strategy to overcome this difficulty. Its key idea is to leverage abundant low-fidelity simulation data, which are less accurate but much faster to compute, to improve the approximations obtained from limited high-fidelity data. In this work, we introduce a novel data-driven framework of multi-fidelity surrogate modeling for parametrized, time-dependent problems using long short-term memory (LSTM) networks, to enhance output predictions both for unseen parameter values and forward in time simultaneously - a task known to be particularly challenging for data-driven models. We demonstrate the wide applicability of the proposed approaches in a variety of engineering problems with high- and low-fidelity data generated through fine versus coarse meshes, small versus large time steps, or finite-element full-order versus deep-learning reduced-order models. Numerical results show that the proposed multi-fidelity LSTM networks not only improve single-fidelity regression significantly, but also outperform multi-fidelity models based on feed-forward neural networks. (c) 2022 Elsevier B.V. All rights reserved.
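The multi-fidelity idea summarized in the abstract - feeding a cheap low-fidelity trajectory, together with the problem parameter, into an LSTM that outputs a corrected high-fidelity prediction at each time step - can be sketched in a few lines. The following is a minimal illustrative implementation in NumPy, not the authors' actual architecture; all names, dimensions, and the single-cell design are assumptions made for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """A single LSTM cell with stacked gate weights (illustrative only)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Rows 0..4h hold the input, forget, candidate, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_new = f * c + i * np.tanh(g)      # update cell state
        h_new = o * np.tanh(c_new)          # emit hidden state
        return h_new, c_new

def multifidelity_lstm_predict(lstm, W_out, mu, y_lf):
    """Roll the LSTM over a low-fidelity trajectory y_lf for parameter mu.
    At each step the input is (mu, low-fidelity value); the linear readout
    W_out maps the hidden state to a corrected high-fidelity estimate."""
    h = np.zeros(lstm.n_hidden)
    c = np.zeros(lstm.n_hidden)
    y_hf = []
    for y in y_lf:
        x = np.concatenate([np.atleast_1d(mu), np.atleast_1d(y)])
        h, c = lstm.step(x, h, c)
        y_hf.append(W_out @ h)
    return np.array(y_hf)

# Hypothetical usage: scalar parameter + scalar low-fidelity output,
# so the cell sees 2 inputs; weights here are untrained.
lstm = LSTMCell(n_in=2, n_hidden=8)
W_out = np.zeros((1, 8))
y_hf = multifidelity_lstm_predict(lstm, W_out, mu=0.5,
                                  y_lf=np.linspace(0.0, 1.0, 10))
```

In practice the weights would be trained on paired low/high-fidelity snapshots so that the recurrent state learns the time-dependent discrepancy between the two fidelity levels, which is what lets the model extrapolate forward in time rather than only interpolating in the parameter.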
Pages: 22