Optimal Tracking Control of Unknown Discrete-Time Linear Systems Using Input-Output Measured Data

Cited by: 207
Authors
Kiumarsi, Bahare [1 ]
Lewis, Frank L. [1 ]
Naghibi-Sistani, Mohammad-Bagher [2 ]
Karimpour, Ali [2 ]
Affiliations
[1] Univ Texas Arlington, UTA Res Inst, Ft Worth, TX 76118 USA
[2] Ferdowsi Univ Mashhad, Dept Elect Engn, Mashhad 9177948974, Iran
Funding
U.S. National Science Foundation
Keywords
Approximate dynamic programming (ADP); linear quadratic tracking (LQT); reinforcement learning (RL); nonlinear systems
DOI
10.1109/TCYB.2014.2384016
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline classification code
0812
Abstract
In this paper, an output-feedback solution to the infinite-horizon linear quadratic tracking (LQT) problem for unknown discrete-time systems is proposed. An augmented system composed of the system dynamics and the reference-trajectory dynamics is constructed. The state of this augmented system is reconstructed from a limited number of past measurements of the input, output, and reference trajectory. A novel Bellman equation is developed that evaluates the value function associated with a fixed policy using only these input, output, and reference-trajectory data. By means of approximate dynamic programming, a class of reinforcement learning methods, the LQT problem is then solved online without any knowledge of the augmented system dynamics; only measurements of the input, output, and reference trajectory are required. Both policy iteration (PI) and value iteration (VI) algorithms are developed that converge to an optimal controller using only these measured data, and the convergence of both algorithms is shown. A simulation example verifies the effectiveness of the proposed control scheme.
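The abstract outlines a data-driven value-iteration scheme for LQT on an augmented (plant plus reference) system. The Python sketch below illustrates that general idea only under simplifying assumptions: it uses full augmented-state feedback with a quadratic Q-function rather than the paper's input-output state reconstruction, and every name in it (A, B, C, F, Qw, Rw, gamma, the learning loop) is a hypothetical placeholder, not taken from the paper. It is a minimal illustration of least-squares value iteration with measured trajectory data, not the authors' algorithm.

```python
import numpy as np

# Hypothetical plant, reference generator, and weights (illustrative only).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # plant state matrix (unknown to the learner)
B = np.array([[0.0],
              [1.0]])               # input matrix
C = np.array([[1.0, 0.0]])          # output matrix
F = np.array([[1.0]])               # reference dynamics r_{k+1} = F r_k
Qw, Rw, gamma = 10.0, 1.0, 0.9      # tracking weight, control weight, discount factor

n_x, n_r, m = 2, 1, 1               # plant state, reference, and input dimensions
n = n_x + n_r                       # augmented state X = [x; r]

def step(x, r, u):
    """Simulate one step of the (unknown-to-the-learner) augmented system."""
    return A @ x + B @ u, F @ r

def cost(x, r, u):
    """One-step LQT cost: tracking-error penalty plus control effort."""
    e = C @ x - r
    return (e.T @ (Qw * e) + u.T @ (Rw * u)).item()

def greedy_gain(H):
    """Greedy policy u = K X from the quadratic Q-function kernel H."""
    Hux, Huu = H[n:, :n], H[n:, n:]
    return -np.linalg.solve(Huu + 1e-6 * np.eye(m), Hux)

# Value iteration on the Q-function Q_j(X, u) = z' H_j z with z = [X; u]:
#   z_k' H_{j+1} z_k = cost_k + gamma * min_u [X_{k+1}; u]' H_j [X_{k+1}; u],
# fitted from measured trajectory data by least squares (no model is used).
H = np.zeros((n + m, n + m))
rng = np.random.default_rng(0)

for j in range(60):
    K = greedy_gain(H)              # greedy gain for the current iterate H_j
    Z, y = [], []
    x, r = rng.normal(size=(n_x, 1)), rng.normal(size=(n_r, 1))
    for k in range(200):
        X = np.vstack([x, r])
        u = K @ X + 0.5 * rng.normal(size=(m, 1))       # probing noise for excitation
        c = cost(x, r, u)
        x, r = step(x, r, u)
        X_next = np.vstack([x, r])
        z_next = np.vstack([X_next, K @ X_next])        # greedy action under H_j
        target = c + gamma * (z_next.T @ H @ z_next).item()
        z = np.vstack([X, u])
        Z.append(np.kron(z, z).ravel())                 # regressor for vec(z z')
        y.append(target)
    h, *_ = np.linalg.lstsq(np.asarray(Z), np.asarray(y), rcond=None)
    H = 0.5 * (h.reshape(n + m, n + m) + h.reshape(n + m, n + m).T)   # symmetrize

print("Learned tracking feedback gain K =", greedy_gain(H))
```

In the paper itself, the value function is parameterized in terms of a finite history of past inputs, outputs, and reference samples rather than the internal state, so no state measurement is needed; the least-squares evaluation of a fixed quadratic parameterization from measured data is of the same flavor as in this sketch.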
Pages: 2770-2779
Number of pages: 10