End-to-end reinforcement learning of Koopman models for economic nonlinear model predictive control

Cited by: 0
Authors
Mayfrank, Daniel [1 ,4 ]
Mitsos, Alexander [1 ,2 ,3 ]
Dahmen, Manuel [1 ]
Affiliations
[1] Forschungszentrum Julich, Inst Climate & Energy Syst Energy Syst Engn ICE 1, D-52425 Julich, Germany
[2] Rhein Westfal TH Aachen, Proc Syst Engn AVTSVT, D-52074 Aachen, Germany
[3] JARA ENERGY, D-52425 Julich, Germany
[4] Rhein Westfal TH Aachen, D-52062 Aachen, Germany
Keywords
Economic model predictive control; Koopman; Reinforcement learning; End-to-end learning; OPERATOR; SYSTEMS;
DOI
10.1016/j.compchemeng.2024.108824
Chinese Library Classification (CLC)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
(Economic) nonlinear model predictive control ((e)NMPC) requires dynamic models that are sufficiently accurate and computationally tractable. Data-driven surrogate models for mechanistic models can reduce the computational burden of (e)NMPC; however, such models are typically trained by system identification for maximum prediction accuracy on simulation samples and perform suboptimally in (e)NMPC. We present a method for end-to-end reinforcement learning of Koopman surrogate models for optimal performance as part of (e)NMPC. We apply our method to two applications derived from an established nonlinear continuous stirred-tank reactor model. The controller performance is compared to that of (e)NMPCs utilizing models trained by system identification and to that of model-free neural network controllers trained by reinforcement learning. We show that the end-to-end trained models outperform those trained by system identification in (e)NMPC, and that, in contrast to the neural network controllers, the (e)NMPC controllers can react to changes in the control setting without retraining.
Pages: 12
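The abstract does not spell out the model structure, but a Koopman surrogate of the kind it refers to typically combines a learned nonlinear lifting (encoder) with linear dynamics in the lifted coordinates, which keeps the model cheap to evaluate inside (e)NMPC. The following is a minimal, hypothetical PyTorch sketch of such a surrogate; the latent dimension, layer sizes, and the linear decoder are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a Koopman surrogate model: nonlinear encoder (lifting),
# linear latent dynamics z_{k+1} = A z_k + B u_k, and a linear map back to
# the state space. All dimensions and layer choices are illustrative.
import torch
import torch.nn as nn


class KoopmanSurrogate(nn.Module):
    def __init__(self, state_dim: int, control_dim: int, latent_dim: int = 8):
        super().__init__()
        # Nonlinear lifting of the system state into Koopman coordinates.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, latent_dim)
        )
        # Linear latent dynamics and control input matrix.
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)
        self.B = nn.Linear(control_dim, latent_dim, bias=False)
        # Linear decoder back to the original state space.
        self.C = nn.Linear(latent_dim, state_dim, bias=False)

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        z_next = self.A(z) + self.B(u)
        return self.C(z_next)  # predicted next state


# Example: one-step prediction for a CSTR-like system with 2 states, 1 input.
model = KoopmanSurrogate(state_dim=2, control_dim=1)
x0 = torch.randn(1, 2)
u0 = torch.randn(1, 1)
x1_pred = model(x0, u0)
```

In the end-to-end setting described in the abstract, the parameters of such a surrogate would be adjusted by reinforcement learning so that the downstream (e)NMPC controller performs well on the closed-loop (economic) objective, rather than by minimizing open-loop prediction error on simulation samples as in system identification.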
Related papers
50 records in total
  • [21] Improvement of End-to-end Automatic Driving Algorithm Based on Reinforcement Learning
    Tang, Jianlin
    Li, Lingyun
    Ai, Yunfeng
    Zhao, Bin
    Ren, Liangcai
    Tian, Bin
    2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019, : 5086 - 5091
  • [22] End-to-End Streaming Video Temporal Action Segmentation With Reinforcement Learning
    Zhang, Jin-Rong
    Wen, Wu-Jun
    Liu, Sheng-Lan
    Huang, Gao
    Li, Yun-Heng
    Li, Qi-Feng
    Feng, Lin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025
  • [23] End-to-end Reinforcement Learning for Time-Optimal Quadcopter Flight
    Ferede, Robin
    De Wagter, Christophe
    Izzo, Dario
    de Croon, Guido C. H. E.
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 6172 - 6177
  • [24] SAROD: EFFICIENT END-TO-END OBJECT DETECTION ON SAR IMAGES WITH REINFORCEMENT LEARNING
    Kang, Junhyung
    Jeon, Hyeonseong
    Bang, Youngoh
    Woo, Simon S.
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1889 - 1893
  • [25] Reinforcement Learning-Based End-to-End Parking for Automatic Parking System
    Zhang, Peizhi
    Xiong, Lu
    Yu, Zhuoping
    Fang, Peiyuan
    Yan, Senwei
    Yao, Jie
    Zhou, Yi
    SENSORS, 2019, 19 (18)
  • [26] Interpretable End-to-End Urban Autonomous Driving With Latent Deep Reinforcement Learning
    Chen, Jianyu
    Li, Shengbo Eben
    Tomizuka, Masayoshi
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (06) : 5068 - 5078
  • [27] Towards End-to-End Escape in Urban Autonomous Driving Using Reinforcement Learning
    Sakhai, Mustafa
    Wielgosz, Maciej
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, INTELLISYS 2023, 2024, 823 : 21 - 40
  • [28] Towards End-to-End Chase in Urban Autonomous Driving Using Reinforcement Learning
    Kolomanski, Michal
    Sakhai, Mustafa
    Nowak, Jakub
    Wielgosz, Maciej
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 3, 2023, 544 : 408 - 426
  • [29] Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning
    Sharma, Archit
    Ahmed, Ahmed M.
    Ahmad, Rehaan
    Finn, Chelsea
    CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229
  • [30] End-to-end Deep Reinforcement Learning for Multi-agent Collaborative Exploration
    Chen, Zichen
    Subagdja, Budhitama
    Tan, Ah-Hwee
    2019 IEEE INTERNATIONAL CONFERENCE ON AGENTS (ICA), 2019, : 99 - 102