Active flow control of a turbulent separation bubble through deep reinforcement learning

Cited by: 12
Authors
Font, Bernat [1 ,2 ]
Alcantara-Avila, Francisco [3 ]
Rabault, Jean
Vinuesa, Ricardo [3 ]
Lehmkuhl, Oriol [1 ]
Affiliations
[1] Barcelona Supercomp Ctr, Barcelona 08034, Spain
[2] Delft Univ Technol, Fac Mech Engn, Delft, Netherlands
[3] KTH Royal Inst Technol, FLOW, Engn Mech, Stockholm, Sweden
Source
5TH MADRID TURBULENCE WORKSHOP | 2024 / Volume 2753
Funding
European Research Council;
Keywords
DIRECT NUMERICAL-SIMULATION; BOUNDARY-LAYERS; SYNTHETIC JET;
DOI
10.1088/1742-6596/2753/1/012022
Chinese Library Classification (CLC)
V [Aeronautics, Astronautics];
Discipline classification code
08 ; 0825 ;
Abstract
The control efficacy of classical periodic forcing and of deep reinforcement learning (DRL) is assessed for a turbulent separation bubble (TSB) at Re_tau = 180, with actuation applied in the upstream region before separation occurs. The TSB resembles the separation phenomena that naturally arise on wings, and a successful reduction of the TSB can have practical implications for reducing the carbon footprint of aviation. We find that classical zero-net-mass-flux (ZNMF) periodic control is able to reduce the TSB by 15.7%. The DRL-based control, on the other hand, achieves a 25.3% reduction and provides a smoother control strategy while also remaining ZNMF. To the best of our knowledge, the present test case is the highest-Reynolds-number flow that has been successfully controlled using DRL to date. In future work, these results will be scaled to well-resolved large-eddy simulation grids. Furthermore, we provide details of our open-source CFD-DRL framework, which is suited to the next generation of exascale computing machines.
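As an illustration of the ZNMF property mentioned in the abstract, the short Python sketch below shows a classical sinusoidal ZNMF forcing signal and one common way to keep a set of jet amplitudes (e.g. those proposed by a DRL policy) at zero net mass flux by mean subtraction. All names and parameter values are illustrative assumptions; the abstract does not specify how the constraint is implemented in the authors' framework.

import numpy as np

def znmf_periodic_forcing(t, amplitude=0.1, frequency=0.5):
    # Classical zero-net-mass-flux periodic forcing: a sinusoidal jet
    # velocity whose time average over a period is zero.
    return amplitude * np.sin(2.0 * np.pi * frequency * t)

def enforce_znmf(actions):
    # Project a set of jet amplitudes onto the ZNMF constraint by removing
    # their mean, so the net mass flux over all jets is zero at every instant.
    actions = np.asarray(actions, dtype=float)
    return actions - actions.mean()

# Example: three actuator amplitudes proposed by a control policy
raw_actions = [0.08, -0.02, 0.05]
print(enforce_znmf(raw_actions))  # sums to ~0 -> zero net mass flux

Mean subtraction is only one possible projection onto the ZNMF constraint; the actual constraint handling and actuator layout used in the paper may differ.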
Pages: 18