A data-driven online ADP control method for nonlinear system based on policy iteration and nonlinear MIMO decoupling ADRC

Cited: 6
Authors
Huang, Zhijian [1 ,3 ]
Zhang, Cheng [1 ]
Zhang, Yanyan [2 ]
Zhang, Guichen [1 ]
Affiliations
[1] Shanghai Maritime Univ, Lab Intelligent Control & Computat, Shanghai 201306, Peoples R China
[2] Tongji Univ, Peoples Hosp 10, Shanghai 200072, Peoples R China
[3] Univ Rhode Isl, Dept Elect Comp & Biomed Engn, Kingston, RI 02881 USA
Keywords
Data driven; Online; Approximate dynamic programming; Linear quadratic function; Least square method; Assumed neural network; ADRC; DISTURBANCE REJECTION CONTROL; TIME LINEAR-SYSTEMS; NEURAL-NETWORK; REINFORCEMENT; MANAGEMENT; STABILITY;
DOI
10.1016/j.neucom.2018.04.024
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Actor-critic approximate dynamic programming (ADP) depends heavily on its network structure and training algorithm. Because neural networks have inherent shortcomings, this paper proposes a data-driven nonlinear online ADP control method that requires no neural network. First, a multi-input multi-output (MIMO) policy iteration scheme is adopted for the proposed ADP. In policy evaluation, the cost function is approximated by a quadratic function fitted with the least-squares method; in policy improvement, the optimal control is obtained by solving the quadratic function linearly. In this way, an optimal control law is derived in terms of variable coefficients and system states. Second, a nonlinear MIMO decoupling active disturbance rejection control (ADRC) method is used to obtain the variable coefficients in real time, which endows the ADP method with nonlinear capability during policy improvement. Once the variable coefficients are determined, the data-driven nonlinear ADP control law follows directly. Finally, an under-actuated nonlinear system and a real application are used to demonstrate the optimal control effect. Compared with several published methods and their simulations, the proposed method excels in its policy-improvement mechanism, nonlinear capability, and control performance. The proposed method thus opens a new way to ADP and overcomes the shortcomings of neural-network-based ADP. Since it can work like a PID controller and requires no data collection, training, or extra learning, the proposed ADP is a truly data-driven nonlinear online optimal control method. (C) 2018 Elsevier B.V. All rights reserved.
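The policy-evaluation/policy-improvement loop described in the abstract can be sketched for the linear-quadratic special case, where it is a standard data-driven policy iteration: the Q-function of a fixed linear policy is exactly quadratic, so its coefficient matrix can be identified by least squares from measured data alone, and the improved gain is read off by solving the quadratic analytically. This is an illustrative sketch under assumed dynamics (a double integrator), gains, and sample counts, not the paper's code; the ADRC-based coefficient update of the actual method is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example plant: discrete-time double integrator with quadratic cost.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)               # state cost weight
Rc = np.array([[1.0]])       # input cost weight
n, m = 2, 1
nz = n + m                   # dimension of z = [x; u]

def phi(z):
    """Quadratic basis: monomials z_i * z_j for i <= j."""
    return np.array([z[i] * z[j] for i in range(nz) for j in range(i, nz)])

def unpack(theta):
    """Rebuild the symmetric Q-function matrix H from fitted coefficients."""
    H = np.zeros((nz, nz))
    k = 0
    for i in range(nz):
        for j in range(i, nz):
            if i == j:
                H[i, i] = theta[k]
            else:
                H[i, j] = H[j, i] = theta[k] / 2.0
            k += 1
    return H

def policy_iteration(K, iters=10, samples=200):
    for _ in range(iters):
        Phi, c = [], []
        for _ in range(samples):
            x = rng.standard_normal(n)
            u = -K @ x + 0.5 * rng.standard_normal(m)  # exploration noise
            x1 = A @ x + B @ u
            u1 = -K @ x1                               # on-policy next input
            z, z1 = np.concatenate([x, u]), np.concatenate([x1, u1])
            # Policy evaluation as least squares on the Bellman residual:
            # Q(z) - Q(z1) = one-step cost, linear in the coefficients theta.
            Phi.append(phi(z) - phi(z1))
            c.append(x @ Qc @ x + u @ Rc @ u)
        theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
        H = unpack(theta)
        # Policy improvement: minimize the quadratic Q over u analytically.
        K = np.linalg.solve(H[n:, n:], H[n:, :n])
    return K

K0 = np.array([[1.0, 1.0]])  # any stabilizing initial gain
K = policy_iteration(K0)
```

No model knowledge is used inside the loop beyond measured (state, input, cost, next-state) samples; the fitted gain converges to the optimal LQR gain, which is the linear analogue of the paper's "quadratic function plus least squares" policy evaluation.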
Pages: 28-37
Page count: 10