Adaptive dynamic programming for robust neural control of unknown continuous-time non-linear systems

Cited: 42
Authors
Yang, Xiong [1 ,2 ]
He, Haibo [2 ]
Liu, Derong [3 ]
Zhu, Yuanheng [4 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Univ Rhode Isl, Dept Elect Comp & Biomed Engn, Kingston, RI 02881 USA
[3] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Guangdong, Peoples R China
[4] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
dynamic programming; robust control; neurocontrollers; continuous time systems; control system synthesis; nonlinear control systems; optimal control; function approximation; Monte Carlo methods; closed loop systems; asymptotic stability; adaptive dynamic programming; robust neural control design; unknown continuous-time nonlinear systems; CT nonlinear systems; ADP-based robust neural control scheme; robust nonlinear control problem; nonlinear optimal control problem; nominal system; ADP algorithm; actor-critic dual networks; control policy approximation; value function approximation; actor neural network weights; critic NN weights; Monte Carlo integration method; closed-loop system; APPROXIMATE OPTIMAL-CONTROL; POLICY ITERATION; ALGORITHM; DESIGN;
DOI
10.1049/iet-cta.2017.0154
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The design of robust controllers for continuous-time (CT) non-linear systems with completely unknown non-linearities is a challenging task. The inability to accurately identify the non-linearities online or offline motivates the design of robust controllers using adaptive dynamic programming (ADP). In this study, an ADP-based robust neural control scheme is developed for a class of unknown CT non-linear systems. To begin with, the robust non-linear control problem is converted into a non-linear optimal control problem by constructing a value function for the nominal system. Then an ADP algorithm is developed to solve the non-linear optimal control problem. The ADP algorithm employs actor-critic dual networks to approximate the control policy and the value function, respectively. Based on this architecture, only system data are needed to simultaneously update the actor neural network (NN) weights and the critic NN weights. Moreover, by using the Monte Carlo integration method, the persistence-of-excitation assumption is no longer required. The closed-loop system with unknown non-linearities is shown to be asymptotically stable under the obtained optimal control. Finally, two examples are provided to validate the developed method.
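The actor-critic iteration sketched in the abstract (critic fits the value function from system data, actor performs greedy policy improvement) can be illustrated on a scalar linear special case, where the optimal value function is known in closed form from the Riccati equation. This is only a minimal sketch: the variable names, the integral-Bellman least-squares fit, and the linear system are illustrative assumptions, not the paper's exact algorithm for unknown non-linear systems.

```python
import numpy as np

# Scalar linear special case: dx/dt = a*x + b*u, cost = integral of q*x^2 + r*u^2.
# For these values the optimal V(x) = p*x^2 with p = sqrt(2) - 1 (Riccati solution).
a, b, q, r = -1.0, 1.0, 1.0, 1.0

def rollout(x0, k, dt=1e-3, horizon=0.2):
    """Simulate the closed loop u = -k*x with Euler steps; return the
    accumulated stage cost over the horizon and the terminal state."""
    x, cost = x0, 0.0
    for _ in range(int(horizon / dt)):
        u = -k * x
        cost += (q * x**2 + r * u**2) * dt
        x += (a * x + b * u) * dt
    return cost, x

def policy_iteration(n_iter=8, n_samples=20, rng=np.random.default_rng(0)):
    k = 0.0  # initial admissible policy (closed loop stable since a < 0)
    p = 0.0
    for _ in range(n_iter):
        # Critic step: fit V(x) = p*x^2 from sampled trajectory data via
        # least squares, using the integral Bellman identity
        # V(x0) = (cost over [0, T]) + V(xT); only system data is used.
        A_rows, b_rows = [], []
        for x0 in rng.uniform(-2, 2, n_samples):
            cost, xT = rollout(x0, k)
            A_rows.append(x0**2 - xT**2)  # phi(x0) - phi(xT)
            b_rows.append(cost)
        p = np.linalg.lstsq(np.array(A_rows)[:, None],
                            np.array(b_rows), rcond=None)[0][0]
        # Actor step: greedy improvement u = -(b*p/r)*x, from V'(x) = 2*p*x.
        k = b * p / r
    return p, k

p, k = policy_iteration()
print(p)  # close to sqrt(2) - 1 ≈ 0.414
```

Sampling random initial states plays the role of the Monte Carlo integration mentioned in the abstract: the critic weights are fit over a batch of sampled data rather than along a single persistently exciting trajectory.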
Pages: 2307-2316
Number of pages: 10
Related References
54 records in total
  • [31] Powell W.B., Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2007, p. 1, DOI 10.1002/9780470182963.
  • [32] Rudin W., Functional Analysis, 2nd ed., 1991.
  • [33] Sahoo A., Xu H., Jagannathan S., "Approximate optimal control of affine nonlinear continuous-time systems using event-sampled neurodynamic programming," IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(3): 639-652.
  • [34] Saridis G.N., "Approximation theory of optimal control for trainable manipulators," IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(3): 152-159.
  • [35] Sokolov Y., Kozma R., Werbos L.D., Werbos P.J., "Complete stability analysis of a heuristic approximate dynamic programming control design," Automatica, 2015, 59: 9-18.
  • [36] Song J., He S., Ding Z., Liu F., "A new iterative algorithm for solving the H∞ control problem of continuous-time Markovian jumping linear systems based on online implementation," International Journal of Robust and Nonlinear Control, 2016, 26(17): 3737-3754.
  • [37] Song J., He S., Liu F., Niu Y., Ding Z., "Data-driven policy iteration algorithm for optimal control of continuous-time Itô stochastic systems with Markovian jumps," IET Control Theory and Applications, 2016, 10(12): 1431-1439.
  • [38] Stevens B.L., Aircraft Control and Simulation, 2015.
  • [39] Vamvoudakis K.G., Vrabie D., Lewis F.L., "Online adaptive algorithm for optimal control with integral reinforcement learning," International Journal of Robust and Nonlinear Control, 2014, 24(17): 2686-2710.
  • [40] Vrabie D., Lewis F., "Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems," Neural Networks, 2009, 22(3): 237-246.