Adaptive dynamic programming for robust neural control of unknown continuous-time non-linear systems

Cited by: 42
Authors
Yang, Xiong [1 ,2 ]
He, Haibo [2 ]
Liu, Derong [3 ]
Zhu, Yuanheng [4 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Univ Rhode Isl, Dept Elect Comp & Biomed Engn, Kingston, RI 02881 USA
[3] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Guangdong, Peoples R China
[4] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
Funding
US National Science Foundation; National Natural Science Foundation of China
Keywords
dynamic programming; robust control; neurocontrollers; continuous time systems; control system synthesis; nonlinear control systems; optimal control; function approximation; Monte Carlo methods; closed loop systems; asymptotic stability; adaptive dynamic programming; robust neural control design; unknown continuous-time nonlinear systems; CT nonlinear systems; ADP-based robust neural control scheme; robust nonlinear control problem; nonlinear optimal control problem; nominal system; ADP algorithm; actor-critic dual networks; control policy approximation; value function approximation; actor neural network weights; critic NN weights; Monte Carlo integration method; closed-loop system; APPROXIMATE OPTIMAL-CONTROL; POLICY ITERATION; ALGORITHM; DESIGN
DOI
10.1049/iet-cta.2017.0154
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
The design of robust controllers for continuous-time (CT) non-linear systems with completely unknown non-linearities is a challenging task. The inability to identify the non-linearities accurately, either online or offline, motivates the design of robust controllers using adaptive dynamic programming (ADP). In this study, an ADP-based robust neural control scheme is developed for a class of unknown CT non-linear systems. To begin with, the robust non-linear control problem is converted into a non-linear optimal control problem by constructing a value function for the nominal system. Then an ADP algorithm is developed to solve the non-linear optimal control problem. The ADP algorithm employs actor-critic dual networks to approximate the control policy and the value function, respectively. With this architecture, only system data are required to update the actor neural network (NN) weights and the critic NN weights simultaneously. Meanwhile, by using the Monte Carlo integration method, the persistence-of-excitation assumption is no longer required. The closed-loop system with unknown non-linearities is shown to be asymptotically stable under the obtained optimal control. Finally, two examples are provided to validate the developed method.
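For orientation, the following is a minimal sketch of the standard formulation that typically underlies this kind of ADP design; the control-affine form of the dynamics and the symbols f, g, rho, Q, R, W_c, W_a, phi, sigma are illustrative assumptions, not notation taken from the paper itself.

\[
\dot{x} = f(x) + g(x)u, \qquad
V(x_0) = \int_{0}^{\infty} \bigl( \rho(x) + Q(x) + u^{\top} R\, u \bigr)\, dt ,
\]

where the cost term \(\rho(x)\), an upper bound related to the uncertainty, is what converts the robust control problem into an optimal control problem for the nominal system. The associated Hamilton-Jacobi-Bellman equation and optimal control are

\[
0 = \min_{u} \bigl[ \rho(x) + Q(x) + u^{\top} R\, u + (\nabla V^{*}(x))^{\top} \bigl( f(x) + g(x)u \bigr) \bigr],
\qquad
u^{*}(x) = -\tfrac{1}{2} R^{-1} g^{\top}(x)\, \nabla V^{*}(x).
\]

The critic and actor networks then approximate the value function and the control policy as

\[
\hat{V}(x) = \hat{W}_{c}^{\top} \phi(x), \qquad
\hat{u}(x) = \hat{W}_{a}^{\top} \sigma(x),
\]

and both weight vectors are tuned from system data by minimizing the squared HJB residual, with the required integrals over the state space evaluated by Monte Carlo integration rather than by relying on a persistently exciting trajectory.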
Pages: 2307-2316
Number of pages: 10
Related Papers
54 records in total
  • [51] Data-Driven Robust Approximate Optimal Tracking Control for Unknown General Nonlinear Systems Using Adaptive Dynamic Programming Method
    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
    IEEE Transactions on Neural Networks, 2011, 22(12): 2226-2236
  • [52] An Event-Triggered ADP Control Approach for Continuous-Time System With Unknown Internal States
    Zhong, Xiangnan; He, Haibo
    IEEE Transactions on Cybernetics, 2017, 47(3): 683-694
  • [53] Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data
    Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
    IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(3): 714-725
  • [54] Using reinforcement learning techniques to solve continuous-time non-linear optimal tracking problem without system dynamics
    Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
    IET Control Theory and Applications, 2016, 10(12): 1339-1347