Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem

Cited by: 69
Authors
Rizvi, Syed Ali Asad [1 ]
Lin, Zongli [1 ]
Affiliations
[1] University of Virginia, Charles L. Brown Department of Electrical and Computer Engineering, Charlottesville, VA 22904, USA
Keywords
Approximate dynamic programming (ADP); linear quadratic regulation (LQR); output feedback; Q-learning; reinforcement learning (RL)
DOI
10.1109/TNNLS.2018.2870075
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Approximate dynamic programming (ADP) and reinforcement learning (RL) have emerged as important tools in the design of optimal and adaptive control systems. Most of the existing RL and ADP methods make use of full-state feedback, a requirement that is often difficult to satisfy in practical applications. As a result, output feedback methods are more desirable as they relax this requirement. In this paper, we present a new output feedback-based Q-learning approach to solving the linear quadratic regulation (LQR) control problem for discrete-time systems. The proposed scheme is completely online and works without requiring knowledge of the system dynamics. More specifically, a new representation of the LQR Q-function is developed in terms of the input-output data. Based on this new Q-function representation, output feedback LQR controllers are designed. We present two output feedback iterative Q-learning algorithms based on the policy iteration and the value iteration methods. This scheme has the advantage that it does not incur any excitation noise bias, and therefore the need for discounted cost functions is circumvented, which in turn ensures closed-loop stability. It is shown that the proposed algorithms converge to the solution of the LQR Riccati equation. A comprehensive simulation study is carried out to illustrate the proposed scheme.
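To make the abstract's description more concrete, the following is a minimal, hypothetical Python sketch of the policy-iteration Q-learning mechanics for the discrete-time LQR that the paper builds on. It is not the authors' code: it uses full-state feedback and a toy second-order system chosen here purely for illustration, whereas the paper's contribution is an output-feedback Q-function parameterized by input-output data with bias-free excitation handling, which is not reproduced in this sketch.

# Minimal sketch (hypothetical, not from the paper) of Q-learning policy
# iteration for the discrete-time LQR, using full-state feedback for brevity.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Toy stable, controllable system (unknown to the learner; used only to
# generate data and to check the result against the Riccati solution).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Qc = np.eye(2)
R = np.array([[1.0]])
n, m = B.shape
d = n + m

def quad_basis(z):
    """Quadratic basis so that phi(z) . theta = z^T H z for symmetric H."""
    outer = np.outer(z, z) * (2.0 - np.eye(len(z)))   # off-diagonal terms count twice
    return outer[np.triu_indices(len(z))]

def unvec(theta, dim):
    """Rebuild the symmetric matrix H from its upper-triangular parameters."""
    H = np.zeros((dim, dim))
    H[np.triu_indices(dim)] = theta
    return H + H.T - np.diag(np.diag(H))

K = np.zeros((m, n))                                  # initial stabilizing policy (A is stable)
for _ in range(15):
    # Evaluate the current policy: collect data under K plus exploration noise
    # and solve the Bellman equation Q_K(x,u) = x'Qc x + u'R u + Q_K(x', Kx')
    # for the Q-function matrix H by least squares.
    Phi, targets = [], []
    x = rng.standard_normal(n)
    for k in range(240):
        if k % 40 == 0:
            x = rng.standard_normal(n)                # periodic resets keep the regressor rich
        u = K @ x + 0.1 * rng.standard_normal(m)      # exploratory action
        x_next = A @ x + B @ u
        u_next = K @ x_next                           # action the current policy takes next
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, u_next])
        Phi.append(quad_basis(z) - quad_basis(z_next))
        targets.append(x @ Qc @ x + u @ R @ u)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    H = unvec(theta, d)
    Hux, Huu = H[n:, :n], H[n:, n:]
    K = -np.linalg.solve(Huu, Hux)                    # greedy policy improvement

# Compare with the Riccati (DARE) solution the learned gain should converge to.
P = solve_discrete_are(A, B, Qc, R)
K_star = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("learned K :", K)
print("optimal K*:", K_star)

The same iteration structure carries over to the paper's setting; the essential change there is that the quadratic Q-function argument is built from a window of past inputs and outputs instead of the state, so no state measurement or dynamics model is needed.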
Pages: 1523 - 1536
Number of pages: 14