Machine learning strategies for systems with invariance properties

Cited by: 274
Authors
Ling, Julia [1]
Jones, Reese [1]
Templeton, Jeremy [1]
Affiliations
[1] Sandia Natl Labs, 7011 East Ave, Livermore, CA 94550 USA
Keywords
Machine learning; Turbulence models; Constitutive models; Tensor invariants; NEURAL-NETWORK; CONSTITUTIVE MODEL; HYPERELASTIC MATERIAL;
DOI
10.1016/j.jcp.2016.05.003
CLC number
TP39 [Computer applications]
Discipline classification code
081203; 0835
Abstract
In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper specifically addresses physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to those transformations. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
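The abstract contrasts two ways of teaching a model an invariance: training on a basis of invariant inputs versus augmenting the training data with transformed copies of the raw inputs. The sketch below illustrates both strategies on a rotation-invariant toy problem with random forests; the target function, the ten-fold augmentation, and all variable names are illustrative assumptions rather than the paper's actual setup.

```python
# Minimal sketch contrasting the two training strategies described in the
# abstract. The toy target, feature choices, and model settings are
# illustrative assumptions, not the authors' configuration.
import numpy as np
from scipy.stats import special_ortho_group      # random rotations in SO(3)
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def random_sym_tensor():
    a = rng.normal(size=(3, 3))
    return 0.5 * (a + a.T)                       # symmetric 3x3 tensor input

def target(s):
    # A rotation-invariant scalar response (assumed for illustration).
    return np.trace(s) ** 2 + np.trace(s @ s)

def invariant_features(s):
    # Strategy 1: an invariant input basis (trace, second moment, determinant).
    return [np.trace(s), np.trace(s @ s), np.linalg.det(s)]

tensors = [random_sym_tensor() for _ in range(500)]
y = np.array([target(s) for s in tensors])

# Strategy 1: embed the invariance by training on the invariant basis.
X_inv = np.array([invariant_features(s) for s in tensors])
model_inv = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_inv, y)

# Strategy 2: train on raw tensor components, augmented with random rotations.
X_aug, y_aug = [], []
for s, yi in zip(tensors, y):
    for _ in range(10):                          # 10 rotated copies per sample
        q = special_ortho_group.rvs(3)
        X_aug.append((q @ s @ q.T)[np.triu_indices(3)])  # 6 unique components
        y_aug.append(yi)
model_aug = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    np.array(X_aug), np.array(y_aug))

# Invariance check: predictions for a test tensor before and after rotation.
s_test = random_sym_tensor()
q = special_ortho_group.rvs(3)
s_rot = q @ s_test @ q.T
# Identical up to round-off, since the invariant features do not change:
print(model_inv.predict([invariant_features(s_test), invariant_features(s_rot)]))
# Only approximately equal, since invariance is learned from augmentation:
print(model_aug.predict([s_test[np.triu_indices(3)], s_rot[np.triu_indices(3)]]))
```

In this toy setting the first model is invariant by construction, while the second must see enough rotated copies to approximate invariance, which mirrors the trade-off in training cost noted in the abstract.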
Pages: 22-35
Number of pages: 14