FlowDNN: a physics-informed deep neural network for fast and accurate flow prediction

Cited by: 34
Authors
Chen, Donglin [1]
Gao, Xiang [1,2]
Xu, Chuanfu [1,2]
Wang, Siqi [1,2]
Chen, Shizhao [1]
Fang, Jianbin [1]
Wang, Zheng [3]
Affiliations
[1] Natl Univ Def Technol, Coll Comp, Changsha 410073, Peoples R China
[2] Natl Univ Def Technol, State Key Lab High Performance Comp, Changsha 410073, Peoples R China
[3] Univ Leeds, Sch Comp, Leeds LS2 9JT, W Yorkshire, England
Funding
National Natural Science Foundation of China
Keywords
Deep neural network; Flow prediction; Attention mechanism; Physics-informed loss
DOI
10.1631/FITEE.2000435
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
For flow-related design optimization problems, e.g., aircraft and automobile aerodynamic design, computational fluid dynamics (CFD) simulations are commonly used to predict flow fields and analyze performance. While important, CFD simulation is a resource-demanding and time-consuming iterative process. This expensive simulation overhead limits the opportunities for large design space exploration and prevents interactive design. In this paper, we propose FlowDNN, a novel deep neural network (DNN) that efficiently learns flow representations from CFD results. FlowDNN saves computational time by directly predicting the expected flow fields from given flow conditions and geometry shapes. FlowDNN is the first DNN to incorporate the underlying physical conservation laws of fluid dynamics, together with a carefully designed attention mechanism, for steady flow prediction. This approach not only improves prediction accuracy but also preserves the physical consistency of the predicted flow fields, which is essential for CFD. Various metrics are derived to evaluate FlowDNN over whole flow fields or regions of interest (RoIs), e.g., boundary layers where flow quantities change rapidly. Experiments show that FlowDNN significantly outperforms alternative methods, delivering faster inference and more accurate results. It speeds up a graphics processing unit (GPU) accelerated CFD solver by more than 14,000x while keeping the prediction error below 5%.
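To make the abstract's central idea concrete, the minimal sketch below (not taken from the paper) shows how a physics-informed loss might combine a data-fitting term with a conservation-law penalty. It assumes a 2D steady incompressible flow on a uniform grid, uses the finite-difference divergence of the predicted velocity field (the continuity-equation residual) as the physics term, and introduces a hypothetical weighting hyperparameter lambda_phys.

    import numpy as np

    def physics_informed_loss(pred, target, dx=1.0, dy=1.0, lambda_phys=0.1):
        """Illustrative loss: data MSE plus a mass-conservation penalty.

        pred, target: arrays of shape (2, H, W) holding the u and v velocity
        components on a uniform grid. The physics term penalises the
        finite-difference divergence du/dx + dv/dy of the predicted field,
        which should vanish for steady incompressible flow.
        """
        u, v = pred[0], pred[1]

        # Data term: mean squared error against the CFD ground truth.
        data_loss = np.mean((pred - target) ** 2)

        # Central-difference approximations of the velocity derivatives.
        du_dx = (u[:, 2:] - u[:, :-2]) / (2.0 * dx)
        dv_dy = (v[2:, :] - v[:-2, :]) / (2.0 * dy)

        # Divergence on the interior of the grid (shapes trimmed to match).
        divergence = du_dx[1:-1, :] + dv_dy[:, 1:-1]
        phys_loss = np.mean(divergence ** 2)

        return data_loss + lambda_phys * phys_loss

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pred = rng.standard_normal((2, 64, 64))
        target = rng.standard_normal((2, 64, 64))
        print(physics_informed_loss(pred, target))

In a formulation of this kind, lambda_phys trades pointwise accuracy against physical consistency of the predicted fields; the abstract reports that incorporating the conservation laws improves accuracy while preserving consistency.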
Pages: 207-219 (13 pages)
Cited references (34 total)
[1] Amodio M, Krishnaswamy S. TraVeLGAN: image-to-image translation by transformation vector learning. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019, p.8975-8984.
[2] Balabanov VO, Giunta AA, Golovidov O, Grossman B, Mason WH, Watson LT, Haftka RT. Reasonable design space approach to response surface approximation. Journal of Aircraft, 1999, 36(1):308-315.
[3] Bhatnagar S, Afshar Y, Pan S, Duraisamy K, Kaushik S. Prediction of aerodynamic flow fields using convolutional neural networks. Computational Mechanics, 2019, 64(2):525-545.
[4] Blazek J. Computational Fluid Dynamics: Principles and Applications, 3rd Ed., 2015, p.466.
[5] Constantin P. Navier-Stokes Equations, 1988, p.199. DOI: 10.7208/chicago/9780226764320.001.0001
[6] Daberkow DD. World Aviation Congress & Exposition, 1998. DOI: 10.4271/985509
[7] Ernst MH. Nonlinear model Boltzmann equations and exact solutions. Physics Reports, 1981, 78(1):1-171.
[8] Farrashkhalvat M. Basic Structured Grid Generation, 2003, p.190. DOI: 10.1016/B978-075065058-8/50008-3
[9] Frankle J. arXiv preprint, 2019.
[10] Geneva N, Zabaras N. Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks. Journal of Computational Physics, 2019, 383:125-147.