Increasing the Flexibility of Hydropower with Reinforcement Learning on a Digital Twin Platform

Cited by: 8
Authors
Tubeuf, Carlotta [1 ]
Birkelbach, Felix [1 ]
Maly, Anton [1 ]
Hofmann, Rene [1 ]
Affiliations
[1] TU Wien, Inst Energy Syst & Thermodynam, Getreidemarkt 9-E302, A-1060 Vienna, Austria
Keywords
reinforcement learning; hydropower; digital twin; pumped storage; transfer learning;
DOI
10.3390/en16041796
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline classification codes
0807; 0820;
Abstract
The increasing demand for flexibility in hydropower systems requires pumped storage power plants to change operating modes and compensate for reactive power more frequently. In this work, we demonstrate the potential of applying reinforcement learning (RL) to control the blow-out process of a hydraulic machine during pump start-up and when operating in synchronous condenser mode. Even though RL is a promising method that is currently attracting much attention, safety concerns are stalling research on RL for the control of energy systems. Therefore, we present a concept that enables process control with RL through the use of a digital twin platform. This enables the safe and effective transfer of the algorithm's learning strategy from a virtual test environment to the physical asset. The successful implementation of RL in a test environment is presented, and an outlook is given on future research on the transfer to a model test rig.
Pages: 10