Interpreting and generalizing deep learning in physics-based problems with functional linear models

Cited by: 2
Authors
Arzani, Amirhossein [1 ,2 ]
Yuan, Lingxiao [3 ]
Newell, Pania [1 ]
Wang, Bei [2 ,4 ]
Affiliations
[1] Univ Utah, Dept Mech Engn, Salt Lake City, UT 84112 USA
[2] Univ Utah, Sci Comp & Imaging Inst, Salt Lake City, UT 84112 USA
[3] Boston Univ, Dept Mech Engn, Boston, MA USA
[4] Univ Utah, Sch Comp, Salt Lake City, UT USA
Keywords
Explainable artificial intelligence (XAI); Scientific machine learning; Functional data analysis; Operator learning; Generalization; Bandwidth selection
DOI
10.1007/s00366-024-01987-z
Chinese Library Classification
TP39 [Computer Applications]
Discipline classification codes
081203; 0835
Abstract
Although deep learning has achieved remarkable success in various scientific machine learning applications, its opaque nature raises concerns about interpretability and about generalization beyond the training data. Interpretability is often essential when modeling physical systems. Moreover, in many physics-based learning tasks it is difficult to acquire datasets that span the entire range of input features, which leads to increased errors on out-of-distribution (OOD) data. In this work, motivated by the field of functional data analysis (FDA), we propose generalized functional linear models as an interpretable surrogate for a trained deep learning model. The surrogate can be trained either from a trained neural network (post-hoc interpretation) or directly from the training data (interpretable operator learning). A library of generalized functional linear models with different kernel functions is considered, and sparse regression is used to discover an interpretable surrogate model that can be expressed analytically. We present test cases in solid mechanics, fluid mechanics, and transport. Our results show that the proposed model achieves accuracy comparable to deep learning and can improve OOD generalization while providing greater transparency and interpretability. Our study underscores the significance of interpretable representations in scientific machine learning and showcases the potential of functional linear models as a tool for interpreting and generalizing deep learning.
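A minimal sketch of the idea the abstract describes, under stated assumptions: each candidate model term is a functional linear feature <k_j, u> = ∫ k_j(s) u(s) ds built from a library of kernel functions, and sparse regression selects the few terms that form an analytically presentable surrogate. The 1D grid, the particular kernel library, the synthetic data, and the use of scikit-learn's Lasso as the sparse solver are all illustrative assumptions, not the authors' actual setup.

```python
# Sketch only (not the authors' code): fit y = sum_j w_j * <k_j, u> from a
# library of candidate kernels k_j, with sparse regression pruning the library.
import numpy as np
from sklearn.linear_model import Lasso

s = np.linspace(0.0, 1.0, 100)   # discretized domain of the input function u(s); assumed
ds = s[1] - s[0]                 # uniform quadrature weight

# Assumed library of candidate kernel functions k_j(s); the paper considers a richer set.
library = [np.ones_like(s), s, s**2, np.sin(np.pi * s), np.cos(np.pi * s)]

def features(U):
    """Map each input function u (rows of U) to integrals <k_j, u> ~ sum_s k_j(s) u(s) ds."""
    return np.stack([(U * k).sum(axis=1) * ds for k in library], axis=1)

# Synthetic data: the "true" map is itself a functional linear operator with two active terms.
rng = np.random.default_rng(0)
U = rng.normal(size=(200, s.size))                               # 200 sampled input functions
y = (U * np.sin(np.pi * s)).sum(axis=1) * ds + 0.5 * (U * s).sum(axis=1) * ds

# Sparse regression over the library; nonzero coefficients give the analytic surrogate.
model = Lasso(alpha=1e-3, fit_intercept=True).fit(features(U), y)
print(dict(zip(["1", "s", "s^2", "sin(pi s)", "cos(pi s)"], model.coef_.round(3))))
```

In the paper's setting the kernels act inside integral operators mapping functions to functions (and the targets can come from a trained network rather than data); this scalar-output version only illustrates the library-plus-sparsity mechanism.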
Pages: 135-157 (23 pages)