A review on deep reinforcement learning for fluid mechanics

Cited: 175
Authors
Garnier, Paul [1 ]
Viquerat, Jonathan [1 ]
Rabault, Jean [2 ]
Larcher, Aurelien [1 ]
Kuhnle, Alexander [3 ]
Hachem, Elie [1 ]
Affiliations
[1] PSL Res Univ, Ctr Mise Forme Mat CEMEF, MINES ParisTech, CNRS UMR 7635, F-06904 Sophia Antipolis, France
[2] Univ Oslo, Dept Math, N-0851 Oslo, Norway
[3] Univ Cambridge, Dept Comp Sci & Technol, Cambridge, England
Keywords
Deep reinforcement learning; Fluid mechanics; Shape optimization
DOI
10.1016/j.compfluid.2021.104973
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Deep reinforcement learning (DRL) has recently been adopted in a wide range of physics and engineering domains for its ability to solve decision-making problems that were previously out of reach due to a combination of non-linearity and high dimensionality. In the last few years, it has spread into the field of computational mechanics, and particularly into fluid dynamics, with recent applications in flow control and shape optimization. In this work, we conduct a detailed review of existing DRL applications to fluid mechanics problems. In addition, we present recent results that further illustrate the potential of DRL in fluid mechanics. The coupling methods used in each case are covered, detailing their advantages and limitations. Our review also compares DRL with classical methods for optimal control and optimization. Finally, several test cases are described that illustrate recent progress made in this field. The goal of this publication is to provide researchers wishing to address new problems with these methods an understanding of DRL capabilities, along with state-of-the-art applications in fluid dynamics. (C) 2021 Elsevier Ltd. All rights reserved.
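The DRL/solver coupling described in the abstract typically follows an agent-environment loop in which each environment step advances the flow solver and returns an observation and a reward. The following is a minimal sketch of that loop, assuming a gym-style `reset`/`step` interface; the "solver" is a toy scalar relaxation standing in for a CFD step, and the state, action, and reward definitions are purely illustrative, not taken from the reviewed works.

```python
class ToyFlowControlEnv:
    """Toy gym-style environment sketching the DRL/CFD coupling loop.

    The internal dynamics are a simple scalar relaxation, a stand-in
    for one flow-solver step; the reward penalizes deviation of the
    observed quantity from a target (e.g. a drag-fluctuation proxy).
    """

    def __init__(self, target=0.0):
        self.target = target
        self.state = 1.0

    def reset(self):
        self.state = 1.0
        return self.state

    def step(self, action):
        # One "solver" step: the control action nudges the observed quantity.
        self.state = 0.9 * self.state + 0.1 * action
        reward = -abs(self.state - self.target)   # penalize deviation
        done = abs(self.state - self.target) < 1e-3
        return self.state, reward, done


def run_episode(env, policy, max_steps=100):
    """Roll out one episode, accumulating reward until done or max_steps."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total_reward += reward
        if done:
            break
    return total_reward


# A naive proportional controller standing in for a trained DRL agent.
naive_policy = lambda obs: -obs

print(run_episode(ToyFlowControlEnv(), naive_policy))
```

In the actual studies reviewed, `step` would invoke a full CFD solve (often the dominant cost), and the policy would be a neural network trained with an algorithm such as PPO rather than a fixed controller.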
Pages: 13