Relaxed Actor-Critic With Convergence Guarantees for Continuous-Time Optimal Control of Nonlinear Systems

Cited: 14
Authors
Duan, Jingliang [1 ,2 ]
Li, Jie [2 ]
Ge, Qiang [2 ]
Li, Shengbo Eben [2 ]
Bujarbaruah, Monimoy [3 ]
Ma, Fei [1 ]
Zhang, Dezhao [4 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Mech Engn, Beijing 100083, Peoples R China
[2] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[3] Univ Calif Berkeley, Dept Mech Engn, Berkeley, CA 94720 USA
[4] Beijing Idriverplus Technol Co Ltd, Beijing 100192, Peoples R China
Source
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES | 2023, Vol. 8, Issue 5
Keywords
Heuristic algorithms; Convergence; Vehicle dynamics; Nonlinear dynamical systems; Mathematical models; Approximation algorithms; Infinite horizon; Reinforcement learning; continuous-time optimal control; nonlinear systems; SYNCHRONIZATION; ITERATION;
DOI
10.1109/TIV.2023.3255264
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper presents the Relaxed Continuous-Time Actor-Critic (RCTAC) algorithm, a method for finding a nearly optimal policy for nonlinear continuous-time (CT) systems with known dynamics over an infinite horizon, such as the path-tracking control of vehicles. RCTAC has several advantages over existing adaptive dynamic programming algorithms for CT systems: it requires neither an admissible initial policy nor input-affine system dynamics for convergence. Given any initial policy, RCTAC converges to an admissible, and subsequently nearly optimal, policy for a general nonlinear system with a saturated controller. The algorithm consists of two phases: a warm-up phase, which minimizes the square of the Hamiltonian to achieve admissibility, and a generalized policy iteration phase, which relaxes the update termination conditions to speed convergence. Convergence and optimality are proven through Lyapunov analysis, and effectiveness is demonstrated through simulations and real-world path-tracking tasks.
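To make the warm-up phase concrete, the following is a minimal sketch, not the authors' implementation, of minimizing the squared Hamiltonian H(x) = r(x, u) + (dV/dx)^T f(x, u) with PyTorch; the network names, dynamics f, and running cost r are illustrative placeholders.

import torch

# Hypothetical setup: value_net approximates the value function V(x),
# policy_net outputs the (saturated) control u(x), f is the known dynamics
# x_dot = f(x, u), and r is the running cost. All names are placeholders.

def hamiltonian(x, value_net, policy_net, f, r):
    """H(x) = r(x, u) + dV/dx . f(x, u), evaluated under the current policy."""
    x = x.clone().requires_grad_(True)
    u = policy_net(x)
    # dV/dx via autograd; create_graph=True so we can backprop through it.
    dVdx = torch.autograd.grad(value_net(x).sum(), x, create_graph=True)[0]
    return r(x, u) + (dVdx * f(x, u)).sum(dim=-1)

def warmup_step(x_batch, value_net, policy_net, f, r, optimizer):
    """One gradient step that drives H toward zero by minimizing mean H^2."""
    loss = (hamiltonian(x_batch, value_net, policy_net, f, r) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, H = 0 at every state is the condition the paper's warm-up phase pushes toward before the generalized policy iteration phase takes over.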
Pages: 3299-3311
Page count: 13