Gym-preCICE: Reinforcement learning environments for active flow control

Cited by: 2
Authors
Shams, Mosayeb [1 ]
Elsheikh, Ahmed H. [1 ]
Affiliation
[1] Heriot Watt Univ, Edinburgh, Scotland
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Reinforcement learning; Active flow control; Gymnasium; OpenAI Gym; preCICE; GO;
DOI
10.1016/j.softx.2023.101446
Chinese Library Classification (CLC)
TP31 [Computer software];
Subject Classification Code
081202; 0835;
Abstract
Active flow control (AFC) involves manipulating fluid flow over time to achieve a desired performance or efficiency. AFC, as a sequential optimisation task, can benefit from utilising Reinforcement Learning (RL) for dynamic optimisation. In this work, we introduce Gym-preCICE, a Python adapter fully compliant with the Gymnasium API to facilitate designing and developing RL environments for single- and multi-physics AFC applications. In an actor-environment setting, Gym-preCICE takes advantage of preCICE, an open-source coupling library for partitioned multi-physics simulations, to handle information exchange between a controller (actor) and an AFC simulation environment. Gym-preCICE provides a framework for seamless, non-invasive integration of RL and AFC, as well as a playground for applying RL algorithms in various AFC-related engineering applications. © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
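To illustrate the actor-environment pattern described in the abstract, below is a minimal, hypothetical sketch of a Gymnasium-compliant AFC environment and the standard control loop an RL actor would use. The environment class, probe and actuation names, and the reward are illustrative assumptions, not the Gym-preCICE API; the physics step that Gym-preCICE would delegate to a preCICE-coupled solver is replaced by a random stub so the example stays self-contained and runnable.

```python
# Hypothetical sketch: a Gymnasium environment standing in for an AFC case.
# In a real Gym-preCICE setup, step()/reset() would exchange actuation and
# probe data with a coupled CFD solver via preCICE; here a stub is used.
import gymnasium as gym
import numpy as np


class JetControlEnv(gym.Env):
    """Toy stand-in for an AFC environment (e.g. jet actuation on a bluff body)."""

    def __init__(self, n_probes: int = 8, max_steps: int = 100):
        super().__init__()
        # Observations: pressure/velocity probe readings; action: a single jet flow rate.
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(n_probes,), dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self._max_steps = max_steps
        self._t = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        # In Gym-preCICE, initial observations would be received from the coupled solver.
        return self._read_probes(), {}

    def step(self, action):
        # In the real adapter, the actuation signal is written to the CFD solver and
        # the simulation advances one coupling window before observations are read back.
        self._t += 1
        obs = self._read_probes()
        reward = -float(np.abs(obs).mean())      # e.g. penalise pressure fluctuations
        terminated = False
        truncated = self._t >= self._max_steps
        return obs, reward, terminated, truncated, {}

    def _read_probes(self):
        # Placeholder for probe data that would arrive via preCICE.
        return self.np_random.normal(size=self.observation_space.shape).astype(np.float32)


# Standard Gymnasium control loop: any RL actor interacts only through reset()/step().
env = JetControlEnv()
obs, info = env.reset(seed=0)
for _ in range(10):
    action = env.action_space.sample()           # stand-in for a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```

In an actual Gym-preCICE environment, the data generated locally in this stub would instead be synchronised with the partitioned multi-physics simulation through preCICE coupling windows, keeping the controller (actor) code unchanged.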
Pages: 9