Extreme theory of functional connections: A fast physics-informed neural network method for solving ordinary and partial differential equations

Cited by: 117
Authors
Schiassi, Enrico [1 ]
Furfaro, Roberto [1 ,2 ]
Leake, Carl [3 ]
De Florio, Mario [1 ]
Johnston, Hunter [3 ]
Mortari, Daniele [3 ]
Affiliations
[1] Univ Arizona, Dept Syst & Ind Engn, Tucson, AZ 85721 USA
[2] Univ Arizona, Dept Aerosp & Mech Engn, Tucson, AZ 85721 USA
[3] Texas A&M Univ, Dept Aerosp Engn, College Stn, TX 77843 USA
Keywords
Physics-informed neural networks; Extreme learning machine; Functional interpolation; Numerical methods; Universal approximator; Least-squares; UNIVERSAL APPROXIMATION; NUMERICAL-SOLUTION; LEARNING-MACHINE; MODEL; REGOLITHS; FRAMEWORK;
DOI
10.1016/j.neucom.2021.06.015
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We present a novel, accurate, fast, and robust physics-informed neural network method for solving problems involving differential equations (DEs), called the Extreme Theory of Functional Connections, or X-TFC. The proposed method is a synergy of two recently developed frameworks for solving problems involving DEs: the Theory of Functional Connections (TFC) and Physics-Informed Neural Networks (PINNs). Here, the latent solution of the DEs is approximated by a TFC constrained expression that employs a Neural Network (NN) as the free function. The TFC-approximated solution form always analytically satisfies the constraints of the DE while maintaining an NN with unconstrained parameters. X-TFC uses a single-layer NN trained via the Extreme Learning Machine (ELM) algorithm. This choice exploits the approximating properties of the ELM algorithm, which reduces the network training to a simple least-squares problem because the output weights are the only trainable parameters. The proposed methodology was tested over a wide range of problems, including the approximation of solutions to linear and nonlinear ordinary DEs (ODEs), systems of ODEs, and partial DEs (PDEs). The results show that, for most of the problems considered, X-TFC achieves high accuracy with low computational time, even for large-scale PDEs, without suffering from the curse of dimensionality. (c) 2021 Elsevier B.V. All rights reserved.
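To make the mechanics described in the abstract concrete, the following is a minimal Python sketch (an illustration, not the authors' code) of the X-TFC recipe for one toy linear initial-value problem, assuming a tanh hidden layer, uniformly sampled random input weights, and the ODE y'(x) + y(x) = 0 with y(0) = 1. The TFC constrained expression enforces the initial condition exactly for any output weights, and the ELM output weights are then obtained with a single linear least-squares solve over the collocation points.

# Minimal X-TFC sketch (illustrative only): solve y'(x) + y(x) = 0, y(0) = 1
# on [0, 1], whose exact solution is exp(-x). The free function g(x) is a
# single-layer ELM: random input weights/biases are frozen, and only the
# output weights beta are trained, here via linear least squares.
import numpy as np

rng = np.random.default_rng(0)

# ELM hidden layer: random, untrained input weights w and biases b
n_hidden = 40
w = rng.uniform(-5.0, 5.0, n_hidden)
b = rng.uniform(-5.0, 5.0, n_hidden)

def h(x):
    """Hidden-layer activations h_j(x) = tanh(w_j * x + b_j)."""
    return np.tanh(np.outer(x, w) + b)

def dh(x):
    """Derivatives h'_j(x) = w_j * (1 - tanh(w_j * x + b_j)**2)."""
    return (1.0 - np.tanh(np.outer(x, w) + b) ** 2) * w

# TFC constrained expression for the single constraint y(0) = y0:
#   y(x)  = beta . h(x) + (y0 - beta . h(0))   -> y(0) = y0 for any beta
#   y'(x) = beta . h'(x)
y0 = 1.0
x = np.linspace(0.0, 1.0, 100)          # collocation points
h0 = h(np.array([0.0]))[0]

# The DE residual y' + y = 0 at the collocation points is linear in beta:
#   [dh(x) + h(x) - h(0)] @ beta = -y0
A = dh(x) + h(x) - h0
rhs = -y0 * np.ones_like(x)
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the approximate solution and compare with the exact exp(-x)
y_approx = h(x) @ beta + (y0 - h0 @ beta)
print("max abs error:", np.max(np.abs(y_approx - np.exp(-x))))

For nonlinear DEs the same construction applies, but the least-squares step is repeated inside an iterative scheme (e.g., a Gauss-Newton style loop on the residual), since the residual is no longer linear in the output weights.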
Pages: 334-356
Number of pages: 23