Nonlinear neuro-optimal tracking control via stable iterative Q-learning algorithm

Cited by: 26
Authors
Wei, Qinglai [1 ]
Song, Ruizhuo [2 ]
Sun, Qiuye [3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[2] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
[3] Northeastern Univ, Sch Informat Sci & Engn, Shenyang 110004, Peoples R China
Funding
Beijing Municipal Natural Science Foundation; National Natural Science Foundation of China
Keywords
Adaptive dynamic programming; Approximate; Dynamic programming; Q-learning; Optimal tracking control; Neural networks; DYNAMIC-PROGRAMMING ALGORITHM; CONTROL SCHEME; SYSTEMS; APPROXIMATION; GAMES;
DOI
10.1016/j.neucom.2015.05.075
CLC Classification Number
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper presents a new policy iteration Q-learning algorithm to solve the infinite-horizon optimal tracking problem for a class of discrete-time nonlinear systems. The idea is to use an iterative adaptive dynamic programming (ADP) technique to construct an iterative tracking control law that makes the system state track the desired state trajectory while minimizing the iterative Q function. Via a system transformation, the optimal tracking problem is converted into an optimal regulation problem, and the policy iteration Q-learning algorithm is then developed to obtain the optimal control law for the regulation system. Initialized by an arbitrary admissible control law, the algorithm's convergence is analyzed: the iterative Q function is shown to be monotonically non-increasing and to converge to the optimal Q function, and it is proven that every iterative control law stabilizes the transformed nonlinear system. Two neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the policy iteration Q-learning algorithm. Finally, two simulation examples illustrate the performance of the developed algorithm. (C) 2015 Elsevier B.V. All rights reserved.
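The policy iteration Q-learning scheme the abstract describes can be illustrated in its simplest setting: a linear system with quadratic cost, where the Q function is exactly quadratic, Q(x, u) = [x; u]ᵀ H [x; u], so policy evaluation and improvement operate directly on the matrix H. The sketch below is an illustrative assumption on a hypothetical scalar example (system matrices, weights, and the fixed-point sweep count are all made up here), not the paper's neural-network implementation for nonlinear systems; it only shows the evaluation/improvement loop starting from an admissible (stabilizing) control law.

```python
import numpy as np

# Hypothetical linear-quadratic example: x_{k+1} = A x_k + B u_k,
# stage cost x'Qc x + u'Rc u. All values are assumptions for illustration.
A = np.array([[0.9]])
B = np.array([[1.0]])
Qc = np.array([[1.0]])  # state cost weight
Rc = np.array([[1.0]])  # control cost weight

def evaluate_policy(K, n_sweeps=500):
    """Policy evaluation: solve the Q-function Bellman equation for u = -K x,
    H = blkdiag(Qc, Rc) + [A B]' [I; -K]' H [I; -K] [A B],
    by fixed-point sweeps (converges when A - B K is stable)."""
    n, m = A.shape[0], B.shape[1]
    AB = np.hstack([A, B])
    S = np.vstack([np.eye(n), -K])
    base = np.block([[Qc, np.zeros((n, m))], [np.zeros((m, n)), Rc]])
    H = np.zeros((n + m, n + m))
    for _ in range(n_sweeps):
        H = base + AB.T @ S.T @ H @ S @ AB
    return H

def improve_policy(H):
    """Greedy policy from the Q function: u = -H_uu^{-1} H_ux x."""
    n = A.shape[0]
    Hux, Huu = H[n:, :n], H[n:, n:]
    return np.linalg.solve(Huu, Hux)

K = np.zeros((1, 1))   # admissible initial control law (A itself is stable)
for _ in range(10):    # policy iteration: evaluate, then improve
    H = evaluate_policy(K)
    K = improve_policy(H)

# Cross-check against the gain from the discrete-time Riccati equation.
P = Qc.copy()
for _ in range(1000):
    P = Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        Rc + B.T @ P @ B, B.T @ P @ A)
K_are = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
print(float(K[0, 0]), float(K_are[0, 0]))  # the two gains should agree
```

In the paper's nonlinear setting the quadratic parameterization no longer holds, which is why two neural networks replace H: one approximates the iterative Q function and the other computes the iterative control law.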
Pages: 520-528 (9 pages)