Safe model-based reinforcement learning for nonlinear optimal control with state and input constraints

Cited by: 17
Authors
Kim, Yeonsoo [1 ]
Kim, Jong Woo [2 ]
Affiliations
[1] Kwangwoon Univ, Dept Chem Engn, 20 Kwangwoon Ro, Seoul 01897, South Korea
[2] Tech Univ Berlin, Chair Bioproc Engn, Berlin, Germany
Funding
National Research Foundation of Singapore;
Keywords
approximate dynamic programming; barrier function; control Lyapunov function; reinforcement learning; Sontag's formula; PROGRAMS; SYSTEMS;
DOI
10.1002/aic.17601
Chinese Library Classification
TQ [Chemical Industry];
Discipline Code
0817;
Abstract
Safety is a critical factor in reinforcement learning (RL) for chemical processes. In our previous work, we proposed a stability-guaranteed RL method for unconstrained nonlinear control-affine systems: in an approximate policy iteration algorithm, a Lyapunov neural network (LNN) was updated while being restricted to be a control Lyapunov function, and the policy was updated using a variation of Sontag's formula. In this study, we additionally handle state and input constraints by introducing a barrier function, and we extend the method from control-affine to general nonlinear systems. We augment the constraints into the objective function and use the LNN, combined with a Lyapunov barrier function, to approximate the augmented value function. Sontag's formula, applied with this approximate function, drives the states into its lower level sets, thereby guaranteeing constraint satisfaction and stability. We prove practical asymptotic stability and forward invariance, and validate the effectiveness on four-tank system simulations.
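The abstract's controller builds on Sontag's universal formula, which turns a control Lyapunov function into a stabilizing feedback law. A minimal sketch of the classical control-affine, scalar-input case (the setting of the authors' earlier work; the paper itself extends beyond control-affine dynamics, and the toy system and function names below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sontag_control(x, f, g, grad_V):
    """Sontag's universal formula for x_dot = f(x) + g(x) * u (scalar input),
    given a control Lyapunov function V through its gradient.
    Illustrative sketch only; names and system are hypothetical."""
    a = grad_V(x) * f(x)  # Lie derivative LfV
    b = grad_V(x) * g(x)  # Lie derivative LgV
    if abs(b) < 1e-9:     # outside the controllable direction, apply no input
        return 0.0
    # Yields V_dot = a + b*u = -sqrt(a^2 + b^4) < 0 whenever b != 0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Toy unstable scalar system x_dot = x + u with CLF V(x) = 0.5 * x^2
f = lambda x: x
g = lambda x: 1.0
grad_V = lambda x: x

# Forward-Euler closed-loop simulation: the state is driven toward the origin
x, dt = 2.0, 0.01
for _ in range(1000):
    u = sontag_control(x, f, g, grad_V)
    x += dt * (f(x) + g(x) * u)
print(abs(x) < 0.1)  # True: state has converged near the origin
```

In the paper, the gradient of the hand-designed CLF is replaced by that of the barrier-augmented LNN, so the same formula steers the state into lower level sets of the learned function and thereby keeps it inside the constraint set.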
Pages: 18
Related papers
50 records total
  • [1] Safe optimal robust control of nonlinear systems with asymmetric input constraints using reinforcement learning
    Zhang, Dehua
    Wang, Yuchen
    Jiang, Kaijun
    Liang, Linlin
    APPLIED INTELLIGENCE, 2024, 54 (01) : 1 - 13
  • [3] Optimal control for a class of nonlinear systems with input constraints based on reinforcement learning
    Luo A.
    Xiao W.-B.
    Zhou Q.
    Lu R.-Q.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2022, 39 (01): : 154 - 164
  • [4] Model-based safe reinforcement learning for nonlinear systems under uncertainty with constraints tightening approach
    Kim, Yeonsoo
    Oh, Tae Hoon
    COMPUTERS & CHEMICAL ENGINEERING, 2024, 183
  • [5] Safe reinforcement learning for affine nonlinear systems with state constraints and input saturation using control barrier functions
    Liu, Shihan
    Liu, Lijun
    Yu, Zhen
    NEUROCOMPUTING, 2023, 518 : 562 - 576
  • [6] Safe control of nonlinear systems in LPV framework using model-based reinforcement learning
    Bao, Yajie
    Velni, Javad Mohammadpour
    INTERNATIONAL JOURNAL OF CONTROL, 2023, 96 (04) : 1078 - 1089
  • [7] Multiple model-based reinforcement learning for nonlinear control
    Samejima, K
    Katagiri, K
    Doya, K
    Kawato, M
    ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE, 2006, 89 (09): : 54 - 69
  • [8] Model-based reinforcement learning for nonlinear optimal control with practical asymptotic stability guarantees
    Kim, Yeonsoo
    Lee, Jong Min
    AICHE JOURNAL, 2020, 66 (10)
  • [9] Critic Learning-Based Safe Optimal Control for Nonlinear Systems with Asymmetric Input Constraints and Unmatched Disturbances
    Qin, Chunbin
    Jiang, Kaijun
    Zhang, Jishi
    Zhu, Tianzeng
    ENTROPY, 2023, 25 (07)
  • [10] Model-based reinforcement learning for output-feedback optimal control of a class of nonlinear systems
    Self, Ryan
    Harlan, Michael
    Kamalapurkar, Rushikesh
    2019 AMERICAN CONTROL CONFERENCE (ACC), 2019, : 2378 - 2383