End-to-end reinforcement learning of Koopman models for economic nonlinear model predictive control

Cited by: 2
Authors
Mayfrank, Daniel [1 ,4 ]
Mitsos, Alexander [1 ,2 ,3 ]
Dahmen, Manuel [1 ]
Affiliations
[1] Forschungszentrum Jülich, Institute of Climate and Energy Systems - Energy Systems Engineering (ICE-1), D-52425 Jülich, Germany
[2] RWTH Aachen University, Process Systems Engineering (AVT.SVT), D-52074 Aachen, Germany
[3] JARA-ENERGY, D-52425 Jülich, Germany
[4] RWTH Aachen University, D-52062 Aachen, Germany
Keywords
Economic model predictive control; Koopman; Reinforcement learning; End-to-end learning
DOI
10.1016/j.compchemeng.2024.108824
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Subject classification codes
081203; 0835
Abstract
(Economic) nonlinear model predictive control ((e)NMPC) requires dynamic models that are sufficiently accurate and computationally tractable. Data-driven surrogate models for mechanistic models can reduce the computational burden of (e)NMPC; however, such models are typically trained by system identification for maximum prediction accuracy on simulation samples and therefore perform suboptimally in (e)NMPC. We present a method for end-to-end reinforcement learning of Koopman surrogate models for optimal performance as part of (e)NMPC. We apply our method to two applications derived from an established nonlinear continuous stirred-tank reactor model. We compare the resulting controller performance to that of (e)NMPCs using models trained by system identification and to that of model-free neural network controllers trained by reinforcement learning. We show that the end-to-end trained models outperform those trained by system identification in (e)NMPC, and that, in contrast to the neural network controllers, the (e)NMPC controllers can react to changes in the control setting without retraining.
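To make the modeling idea in the abstract concrete, below is a minimal sketch of a Koopman surrogate model of the kind described: a learned encoder lifts the state into a higher-dimensional space where the dynamics are linear in the lifted coordinates, which keeps the model cheap to embed in an (e)NMPC optimization problem. This is an assumed PyTorch-style illustration, not the authors' implementation; all names (KoopmanSurrogate, lifted_dim, layer sizes) are hypothetical.

```python
# Minimal sketch (illustrative, not the authors' code) of a Koopman
# surrogate model: nonlinear lifting, linear lifted-space dynamics,
# linear decoder back to the original state space.
import torch
import torch.nn as nn


class KoopmanSurrogate(nn.Module):
    def __init__(self, state_dim: int, input_dim: int, lifted_dim: int):
        super().__init__()
        # Nonlinear lifting x -> z (learned Koopman observables);
        # the hidden width 64 is an arbitrary illustrative choice.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, lifted_dim),
        )
        # Linear dynamics in the lifted space: z_next = A z + B u
        self.A = nn.Linear(lifted_dim, lifted_dim, bias=False)
        self.B = nn.Linear(input_dim, lifted_dim, bias=False)
        # Linear decoder z -> x, so predictions stay linear in the
        # lifted state when the model is embedded in (e)NMPC.
        self.C = nn.Linear(lifted_dim, state_dim, bias=False)

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        """One-step prediction of the next state from (x, u)."""
        z = self.encoder(x)
        z_next = self.A(z) + self.B(u)
        return self.C(z_next)


# Usage example with hypothetical dimensions:
model = KoopmanSurrogate(state_dim=4, input_dim=2, lifted_dim=16)
x_next = model(torch.randn(1, 4), torch.randn(1, 2))
```

The distinction the abstract draws is about the training signal, not the architecture: in end-to-end training, the encoder and the matrices A, B, C would be updated from closed-loop (e)NMPC performance (e.g., via a policy-gradient reinforcement learning algorithm), rather than from one-step prediction error on simulation samples as in system identification.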
Pages: 12