Automated synthesis of steady-state continuous processes using reinforcement learning

Cited by: 19
Authors
Goettl, Quirin [1]
Grimm, Dominik G. [2,3,4]
Burger, Jakob [1]
Affiliations
[1] Tech Univ Munich, Campus Straubing Biotechnol & Sustainabil, Lab Chem Proc Engn, D-94315 Straubing, Germany
[2] Tech Univ Munich, Campus Straubing Biotechnol & Sustainabil, D-94315 Straubing, Germany
[3] Weihenstephan Triesdorf Univ Appl Sci, D-94315 Straubing, Germany
[4] Tech Univ Munich, Dept Informat, D-85748 Garching, Germany
Keywords
automated process synthesis; flowsheet synthesis; artificial intelligence; machine learning; reinforcement learning; ARTIFICIAL-INTELLIGENCE; PROCESS SIMULATION; OPTIMIZATION; CHALLENGES; DESIGN; SYSTEM; MODEL; GO
DOI
10.1007/s11705-021-2055-9
Chinese Library Classification
TQ [Chemical Industry]
Discipline Code
0817
Abstract
Automated flowsheet synthesis is an important field in computer-aided process engineering. The present work demonstrates how reinforcement learning can be used for automated flowsheet synthesis without any heuristics or prior knowledge of conceptual design. The environment consists of a steady-state flowsheet simulator that contains all physical knowledge. An agent is trained to take discrete actions and sequentially build up flowsheets that solve a given process problem. A novel method named SynGameZero is developed to ensure good exploration in this complex problem. Therein, flowsheet synthesis is modelled as a game between two competing players. The agent plays this game against itself during training and consists of an artificial neural network and a tree search for forward planning. The method is applied successfully to a reaction-distillation process in a quaternary system.
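To make the abstract's setup concrete, the following is a minimal, hypothetical Python sketch of the agent-environment loop it describes: discrete synthesis actions applied to a flowsheet-simulator environment, with the two-player self-play framing. None of the names (FlowsheetEnv, choose_action, play_one_flowsheet) come from the paper; the random policy and random score stand in for the neural network with tree search and for the steady-state flowsheet simulator.

```python
# Minimal sketch of the idea described in the abstract, not the authors' code.
# All names here are hypothetical; the published method couples a steady-state
# flowsheet simulator with an AlphaZero-style agent (neural network plus tree
# search) that plays the synthesis "game" against itself.
import random

class FlowsheetEnv:
    """Toy stand-in for the steady-state flowsheet simulator (the environment)."""

    ACTIONS = ["add_reactor", "add_distillation_column", "add_recycle", "terminate"]

    def __init__(self, max_units=5):
        self.max_units = max_units
        self.flowsheet = []

    def reset(self):
        self.flowsheet = []                     # unit operations placed so far
        return tuple(self.flowsheet)

    def step(self, action):
        """Apply one discrete synthesis action; return (state, reward, done)."""
        if action == "terminate" or len(self.flowsheet) >= self.max_units:
            # Placeholder score; a real simulator would converge the flowsheet
            # and return an economic objective such as net present value.
            return tuple(self.flowsheet), random.random(), True
        self.flowsheet.append(action)
        return tuple(self.flowsheet), 0.0, False


def choose_action(state, env):
    # Stand-in for the policy network guided by tree search (forward planning).
    return random.choice(env.ACTIONS)


def play_one_flowsheet(env):
    state, reward, done = env.reset(), 0.0, False
    while not done:
        state, reward, done = env.step(choose_action(state, env))
    return list(env.flowsheet), reward


# Two-player framing: each "player" synthesizes a flowsheet and the better one
# wins; during training the same agent plays both sides (self-play).
env = FlowsheetEnv()
flowsheet_a, score_a = play_one_flowsheet(env)
flowsheet_b, score_b = play_one_flowsheet(env)
print("Player A:", flowsheet_a, round(score_a, 3))
print("Player B:", flowsheet_b, round(score_b, 3))
print("Winner:", "A" if score_a >= score_b else "B")
```

In the actual method, the comparison between the two players' flowsheets provides the win/loss signal that drives self-play training; here it is only mimicked with random scores.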
Pages: 288-302 (15 pages)
Related papers
50 records in total
[21]   Bridging Transient and Steady-State Performance in Voltage Control: A Reinforcement Learning Approach With Safe Gradient Flow [J].
Feng, Jie ;
Cui, Wenqi ;
Cortes, Jorge ;
Shi, Yuanyuan .
IEEE CONTROL SYSTEMS LETTERS, 2023, 7: 2845-2850
[22]   Automated calibration of somatosensory stimulation using reinforcement learning [J].
Borda, Luigi ;
Gozzi, Noemi ;
Preatoni, Greta ;
Valle, Giacomo ;
Raspopovic, Stanisa .
JOURNAL OF NEUROENGINEERING AND REHABILITATION, 2023, 20 (01)
[23]   Automated Vulnerability Exploitation Using Deep Reinforcement Learning [J].
Almajali, Anas ;
Al-Abed, Loiy ;
Yousef, Khalil M. Ahmad ;
Mohd, Bassam J. ;
Samamah, Zaid ;
Abu Shhadeh, Anas .
APPLIED SCIENCES-BASEL, 2024, 14 (20)
[24]   Formal Policy Synthesis for Continuous-State Systems via Reinforcement Learning [J].
Kazemi, Milad ;
Soudjani, Sadegh .
INTEGRATED FORMAL METHODS, IFM 2020, 2020, 12546: 3-21
[25]   A NOVEL REINFORCEMENT LEARNING STRATEGY FOR SEQUENTIAL DETECTION OF STEADY-STATE VISUAL EVOKED POTENTIAL-BASED BCI [J].
Cao, Lei ;
Jin, Yi ;
Wang, Zijan ;
Fan, Chunjiang .
JOURNAL OF NONLINEAR AND CONVEX ANALYSIS, 2023, 24 (08): 1819-1833
[26]   SEIG-based transient- and steady-state analysis using dragon fly approach [J].
Singh, Gurdiyal ;
Singh, V. R. .
SOFT COMPUTING, 2023, 27 (06): 2993-3005
[27]   Steady-state and dynamic simulation of a grinding mill using grind curves [J].
le Roux, Johan Derik ;
Steinboeck, Andreas ;
Kugi, Andreas ;
Craig, Ian Keith .
MINERALS ENGINEERING, 2020, 152
[28]   Highway Exiting Planner for Automated Vehicles Using Reinforcement Learning [J].
Cao, Zhong ;
Yang, Diange ;
Xu, Shaobing ;
Peng, Huei ;
Li, Boqi ;
Feng, Shuo ;
Zhao, Ding .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (02): 990-1000
[29]   Development of Automated Negotiation Models for Suppliers Using Reinforcement Learning [J].
Lee, Ga Hyun ;
Song, Byunghun ;
Jung, Jieun ;
Jeon, Hyun Woo .
ADVANCES IN PRODUCTION MANAGEMENT SYSTEMS-PRODUCTION MANAGEMENT SYSTEMS FOR VOLATILE, UNCERTAIN, COMPLEX, AND AMBIGUOUS ENVIRONMENTS, APMS 2024, PT V, 2024, 732: 367-380
[30]   Automated Aircraft Stall Recovery using Reinforcement Learning and Supervised Learning Techniques [J].
Tomar, Dheerenrda Singh ;
Gauci, Jason ;
Dingli, Alexiei ;
Muscat, Alan ;
Mangion, David Zammit .
2021 IEEE/AIAA 40TH DIGITAL AVIONICS SYSTEMS CONFERENCE (DASC), 2021