Reinforcement learning solution for HJB equation arising in constrained optimal control problem

Cited by: 98
Authors
Luo, Biao [1 ]
Wu, Huai-Ning [2 ]
Huang, Tingwen [3 ]
Liu, Derong [4 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[2] Beijing Univ Aeronaut & Astronaut, Beihang Univ, Sci & Technol Aircraft Control Lab, Beijing 100191, Peoples R China
[3] Texas A&M Univ Qatar, Doha, Qatar
[4] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Constrained optimal control; Data-based; Off-policy reinforcement learning; Hamilton-Jacobi-Bellman equation; The method of weighted residuals; ADAPTIVE OPTIMAL-CONTROL; TIME NONLINEAR-SYSTEMS; DYNAMIC-PROGRAMMING ALGORITHM; POLICY ITERATION; STABILIZATION; DESIGN;
DOI
10.1016/j.neunet.2015.08.007
Chinese Library Classification (CLC) number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Solving the constrained optimal control problem requires the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. An important feature of off-policy RL is that its policy evaluation can be realized with data generated by behavior policies other than the target policy, which overcomes the insufficient-exploration problem. The convergence of the off-policy RL method is proved by demonstrating its equivalence to the successive approximation approach. The implementation procedure is based on an actor-critic neural network structure, where the function approximation is carried out with linearly independent basis functions. The convergence of the implementation procedure with function approximation is also proved. Finally, the effectiveness of the method is verified through computer simulations. (C) 2015 Elsevier Ltd. All rights reserved.
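For context, the sketch below shows a standard constrained-input formulation of the kind this line of work builds on, restricted to a single bounded input with a non-quadratic tanh-based input penalty. It is a generic illustration of the HJBE and of the successive approximation (policy iteration) step the abstract refers to, not the paper's exact equations; the symbols $f$, $g$, $Q$, $R$, and the bound $\bar{u}$ are introduced here purely for illustration.

For a control-affine system $\dot{x} = f(x) + g(x)\,u$ with $|u| \le \bar{u}$, the cost
$$V(x_0) = \int_0^{\infty} \Big( Q(x(t)) + 2\int_0^{u(t)} \bar{u}\,\tanh^{-1}(v/\bar{u})\,R\,\mathrm{d}v \Big)\,\mathrm{d}t$$
leads to the constrained HJBE
$$0 = Q(x) + \nabla V^{*}(x)^{\top}\big(f(x) + g(x)\,u^{*}(x)\big) + 2\int_0^{u^{*}(x)} \bar{u}\,\tanh^{-1}(v/\bar{u})\,R\,\mathrm{d}v,$$
whose minimizing policy is automatically bounded:
$$u^{*}(x) = -\bar{u}\,\tanh\!\Big(\tfrac{1}{2\bar{u}}\,R^{-1} g(x)^{\top}\nabla V^{*}(x)\Big).$$
Successive approximation alternates policy evaluation of $V^{(i)}$ under a fixed policy $u^{(i)}$ (the step that off-policy RL performs from measured data generated by arbitrary behavior policies) with the policy improvement $u^{(i+1)}(x) = -\bar{u}\,\tanh\big(\tfrac{1}{2\bar{u}}\,R^{-1} g(x)^{\top}\nabla V^{(i)}(x)\big)$.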
Pages: 150-158
Number of pages: 9