Automated synthesis of steady-state continuous processes using reinforcement learning

Cited by: 18
Authors
Goettl, Quirin [1 ]
Grimm, Dominik G. [2 ,3 ,4 ]
Burger, Jakob [1 ]
Affiliations
[1] Tech Univ Munich, Campus Straubing Biotechnol & Sustainabil, Lab Chem Proc Engn, D-94315 Straubing, Germany
[2] Tech Univ Munich, Campus Straubing Biotechnol & Sustainabil, D-94315 Straubing, Germany
[3] Weihenstephan Triesdorf Univ Appl Sci, D-94315 Straubing, Germany
[4] Tech Univ Munich, Dept Informat, D-85748 Garching, Germany
Keywords
automated process synthesis; flowsheet synthesis; artificial intelligence; machine learning; reinforcement learning; ARTIFICIAL-INTELLIGENCE; PROCESS SIMULATION; OPTIMIZATION; CHALLENGES; DESIGN; SYSTEM; MODEL; GO;
DOI
10.1007/s11705-021-2055-9
CLC number
TQ [Chemical Industry];
Subject classification code
0817;
Abstract
Automated flowsheet synthesis is an important field in computer-aided process engineering. The present work demonstrates how reinforcement learning can be used for automated flowsheet synthesis without any heuristics or prior knowledge of conceptual design. The environment consists of a steady-state flowsheet simulator that contains all physical knowledge. An agent is trained to take discrete actions and sequentially build up flowsheets that solve a given process problem. A novel method named SynGameZero is developed to ensure effective exploration in this complex problem: flowsheet synthesis is modelled as a game of two competing players. The agent plays this game against itself during training and consists of an artificial neural network and a tree search for forward planning. The method is applied successfully to a reaction-distillation process in a quaternary system.
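The abstract's core idea (an agent sequentially selects discrete unit operations, using tree search for forward planning, and two competing players each build a flowsheet) can be illustrated with a toy sketch. This is not the paper's SynGameZero method: the actual approach couples a neural network with a self-play tree search, whereas here a plain exhaustive depth-limited search stands in for the learned policy, and all unit names and scores are invented placeholders.

```python
# Toy illustration only: a hypothetical "flowsheet simulator" scores a
# sequence of unit operations, and a depth-limited exhaustive tree search
# stands in for SynGameZero's learned policy + tree search.
UNITS = ["reactor", "distillation", "recycle", "terminate"]

def simulate(flowsheet):
    """Toy economic score for a flowsheet (list of unit names)."""
    score = 0.0
    for i, unit in enumerate(flowsheet):
        if unit == "reactor":
            score -= 1.0  # reactor costs up front...
        elif unit == "distillation":
            # ...but enables profitable separation downstream
            score += 3.0 if "reactor" in flowsheet[:i] else -1.0
        elif unit == "recycle":
            score += 1.0
    return score - 0.5 * len(flowsheet)  # capital-cost penalty per unit

def tree_search(flowsheet, depth):
    """Forward planning: return the action whose best reachable
    flowsheet (within `depth` further steps) scores highest."""
    if depth == 0:
        return "terminate", simulate(flowsheet)
    best_action, best_value = "terminate", simulate(flowsheet)
    for action in UNITS:
        if action == "terminate":
            continue
        _, value = tree_search(flowsheet + [action], depth - 1)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

def play_game(depths=(1, 3), max_steps=5):
    """Two competing players build flowsheets; higher score wins."""
    scores = []
    for depth in depths:
        flowsheet = []
        for _ in range(max_steps):
            action, _ = tree_search(flowsheet, depth)
            if action == "terminate":
                break
            flowsheet.append(action)
        scores.append(simulate(flowsheet))
    return scores
```

With these toy numbers the myopic player (depth 1) greedily accumulates recycles, while the deeper search accepts the up-front reactor cost to unlock distillation and wins the game, which is the point of forward planning in the competitive setup.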
Pages: 288-302
Page count: 15
Related papers (50 total)
  • [1] Automated synthesis of steady-state continuous processes using reinforcement learning
    Göttl, Quirin
    Grimm, Dominik G.
    Burger, Jakob
    Frontiers of Chemical Science and Engineering, 2022, 16 (02): 288 - 302
  • [3] Automated Flowsheet Synthesis Using Hierarchical Reinforcement Learning: Proof of Concept
    Göttl, Quirin
    Tönges, Yannic
    Grimm, Dominik G.
    Burger, Jakob
    CHEMIE INGENIEUR TECHNIK, 2021, 93 (12) : 2010 - 2018
  • [4] Predicting steady-state biogas production from waste using advanced machine learning-metaheuristic approaches
    Sun, Yesen
    Dai, Hong-liang
    Moayedi, Hossein
    Le, Binh Nguyen
    Adnan, Rana Muhammad
    FUEL, 2024, 355
  • [5] Steady-State Error Compensation for Reinforcement Learning with Quadratic Rewards
    Wang, Liyao
    Zheng, Zishun
    Lin, Yuan
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 1608 - 1613
  • [6] Steady-State Error Compensation for Reinforcement Learning-Based Control of Power Electronic Systems
    Weber, Daniel
    Schenke, Maximilian
    Wallscheid, Oliver
    IEEE ACCESS, 2023, 11 : 76524 - 76536
  • [7] Steady-state iterative learning control for a class of nonlinear PDE processes
    Huang, Deqing
    Xu, Jian-Xin
    JOURNAL OF PROCESS CONTROL, 2011, 21 (08) : 1155 - 1163
  • [8] Automated Steady and Transient State Identification in Noisy Processes
    Rhinehart, R. Russell
    2013 AMERICAN CONTROL CONFERENCE (ACC), 2013, : 4477 - 4493
  • [9] REINFORCEMENT LEARNING USING GAUSSIAN PROCESSES FOR DISCRETELY CONTROLLED CONTINUOUS PROCESSES
    De Paula, M.
    Martinez, E. C.
    LATIN AMERICAN APPLIED RESEARCH, 2013, 43 (03) : 249 - 254
  • [10] Automated Design of Analog Circuits Using Reinforcement Learning
    Settaluri, Keertana
    Liu, Zhaokai
    Khurana, Rishubh
    Mirhaj, Arash
    Jain, Rajeev
    Nikolic, Borivoje
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (09) : 2794 - 2807