Safe model-based reinforcement learning for nonlinear optimal control with state and input constraints

Cited by: 17
Authors
Kim, Yeonsoo [1 ]
Kim, Jong Woo [2 ]
Affiliations
[1] Kwangwoon Univ, Dept Chem Engn, 20 Kwangwoon Ro, Seoul 01897, South Korea
[2] Tech Univ Berlin, Chair Bioproc Engn, Berlin, Germany
Funding
National Research Foundation of Singapore;
Keywords
approximate dynamic programming; barrier function; control Lyapunov function; reinforcement learning; Sontag's formula; PROGRAMS; SYSTEMS;
DOI
10.1002/aic.17601
Chinese Library Classification
TQ [Chemical Industry];
Discipline classification code
0817 ;
Abstract
Safety is a critical factor for reinforcement learning (RL) in chemical processes. In our previous work, we proposed a stability-guaranteed RL method for unconstrained nonlinear control-affine systems. In that approximate policy iteration algorithm, a Lyapunov neural network (LNN) was updated while being restricted to be a control Lyapunov function, and a policy was updated using a variation of Sontag's formula. In this study, we additionally consider state and input constraints by introducing a barrier function, and we extend the applicable class of systems to general nonlinear systems. We augment the constraints into the objective function and use the LNN, combined with a Lyapunov barrier function, to approximate the augmented value function. Sontag's formula, with this approximate function as its input, drives the states into the function's lower level sets, thereby guaranteeing constraint satisfaction and stability. We prove practical asymptotic stability and forward invariance. The effectiveness of the method is validated through simulations of a four-tank system.
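To make the abstract's core mechanism concrete, the following is a minimal sketch of Sontag's universal formula for a single-input control-affine system with a known control Lyapunov function. The function names, the scalar example plant, and the quadratic CLF are illustrative assumptions, not the paper's actual LNN-based construction, which learns the (barrier-augmented) Lyapunov function rather than fixing it in advance.

```python
import numpy as np

def sontag_control(a, b, eps=1e-9):
    """Sontag's universal formula for a single-input control-affine
    system x_dot = f(x) + g(x) * u with control Lyapunov function V.

    a = dV/dx . f(x)   (drift term of V_dot)
    b = dV/dx . g(x)   (input term of V_dot)
    Returns a stabilizing input u, with u = 0 when b vanishes.
    """
    if abs(b) < eps:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Illustrative example: unstable scalar plant x_dot = x + u
# with the (assumed) quadratic CLF V(x) = x**2 / 2.
def closed_loop_vdot(x):
    a = x * x           # dV/dx * f(x) = x * x
    b = x               # dV/dx * g(x) = x * 1
    u = sontag_control(a, b)
    return x * (x + u)  # V_dot = dV/dx * (f(x) + g(x) * u)
```

For this toy plant the formula reduces to u = -(1 + sqrt(2)) * x, so the closed loop is x_dot = -sqrt(2) * x and V decreases along every nonzero trajectory; in the paper, the same formula is applied to the learned barrier-augmented value function so that trajectories are driven into its lower level sets.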
Pages: 18
Related Papers
50 records
  • [41] Constrained Differentiable Cross-Entropy Method for Safe Model-based Reinforcement Learning
    Mottahedi, Sam
    Pavlak, Gregory S.
    PROCEEDINGS OF THE 2022 THE 9TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION, BUILDSYS 2022, 2022, : 40 - 48
  • [42] Safe reinforcement learning for discrete-time fully cooperative games with partial state and control constraints using control barrier functions
    Liu, Shihan
    Liu, Lijun
    Yu, Zhen
    NEUROCOMPUTING, 2023, 517 : 118 - 132
  • [43] Hybrid control for combining model-based and model-free reinforcement learning
    Pinosky, Allison
    Abraham, Ian
    Broad, Alexander
    Argall, Brenna
    Murphey, Todd D.
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2023, 42 (06) : 337 - 355
  • [44] Model-Based Reinforcement Learning Control of Electrohydraulic Position Servo Systems
    Yao, Zhikai
    Liang, Xianglong
    Jiang, Guo-Ping
    Yao, Jianyong
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2023, 28 (03) : 1446 - 1455
  • [45] Model-based reinforcement learning control of reaction-diffusion problems
    Schenk, Christina
    Vasudevan, Aditya
    Haranczyk, Maciej
    Romero, Ignacio
    OPTIMAL CONTROL APPLICATIONS & METHODS, 2024, 45 (06) : 2897 - 2914
  • [46] Transmission Control in NB-IoT With Model-Based Reinforcement Learning
    Alcaraz, Juan J.
    Losilla, Fernando
    Gonzalez-Castano, Francisco-Javier
    IEEE ACCESS, 2023, 11 : 57991 - 58005
  • [47] Delay-aware model-based reinforcement learning for continuous control
    Chen, Baiming
    Xu, Mengdi
    Li, Liang
    Zhao, Ding
    NEUROCOMPUTING, 2021, 450 : 119 - 128
  • [48] Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints (vol 87, pg 553, 2014)
    Yang, Xiong
    Liu, Derong
    Wang, Ding
    INTERNATIONAL JOURNAL OF CONTROL, 2014, 87 (03) : I - I
  • [49] Reinforcement learning-based optimal control of unknown constrained-input nonlinear systems using simulated experience
    Asl, Hamed Jabbari
    Uchibe, Eiji
    NONLINEAR DYNAMICS, 2023, 111 (17) : 16093 - 16110
  • [50] Safe Reinforcement Learning and Adaptive Optimal Control With Applications to Obstacle Avoidance Problem
    Wang, Ke
    Mu, Chaoxu
    Ni, Zhen
    Liu, Derong
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (03) : 4599 - 4612