H∞ Tracking Control for Linear Discrete-Time Systems: Model-Free Q-Learning Designs

Cited by: 39
Authors
Yang, Yunjie [1 ]
Wan, Yan [2 ]
Zhu, Jihong [1 ]
Lewis, Frank L. [3 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[2] Univ Texas Arlington, Dept Elect Engn, Arlington, TX 76019 USA
[3] Univ Texas Arlington, UTA Res Inst, Ft Worth, TX 75052 USA
Source
IEEE CONTROL SYSTEMS LETTERS, 2021, Vol. 5, No. 1
Funding
National Natural Science Foundation of China
Keywords
Linear discrete-time systems; H-infinity tracking control; Q-learning; zero-sum games
DOI
10.1109/LCSYS.2020.3001241
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this letter, a novel model-free Q-learning-based approach is developed to solve the H-infinity tracking problem for linear discrete-time systems. A new exponentially discounted value function is introduced that includes the cost of the whole control input and the tracking error. The tracking Bellman equation and the game algebraic Riccati equation (GARE) are derived. The solution to the GARE yields the feedback and feedforward parts of the control input. A Q-learning algorithm is then developed to learn the solution of the GARE online without requiring any knowledge of the system dynamics. Convergence of the algorithm is analyzed, and it is also proved that the probing noise added to maintain the persistence of excitation (PE) condition does not introduce any bias. An example based on the F-16 aircraft short-period dynamics is given to validate the proposed algorithm.
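For orientation, a representative form of the discounted value function and tracking Bellman equation used in this line of discounted H-infinity tracking designs is sketched below. The symbols (tracking error e_i, disturbance w_i, weights Q and R, discount factor α, attenuation level γ) are assumptions chosen for illustration and are not taken verbatim from the letter itself.
\[
V(x_k, r_k) \;=\; \sum_{i=k}^{\infty} \alpha^{\,i-k}\Big( e_i^{\top} Q\, e_i \;+\; u_i^{\top} R\, u_i \;-\; \gamma^{2}\, w_i^{\top} w_i \Big), \qquad e_i = C x_i - r_i, \quad 0 < \alpha \le 1,
\]
which satisfies the tracking Bellman recursion
\[
V(x_k, r_k) \;=\; e_k^{\top} Q\, e_k \;+\; u_k^{\top} R\, u_k \;-\; \gamma^{2}\, w_k^{\top} w_k \;+\; \alpha\, V(x_{k+1}, r_{k+1}).
\]
With a quadratic value function on the augmented state of plant and reference, this recursion leads to a game algebraic Riccati equation; the model-free Q-learning step described in the abstract estimates the corresponding Q-function kernel directly from measured data instead of from the system matrices.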
Pages: 175-180
Number of pages: 6
Related Papers
50 records in total
  • [41] Output Feedback Reinforcement Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem
    Rizvi, Syed Ali Asad
    Lin, Zongli
    2017 IEEE 56TH ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2017,
  • [42] Finite-horizon H∞ tracking control for discrete-time linear systems
    Wang, Jian
    Wang, Wei
    Liang, Xiaofeng
    Zuo, Chao
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2024, 34 (01) : 54 - 70
  • [43] FINITE-HORIZON OPTIMAL CONTROL OF DISCRETE-TIME LINEAR SYSTEMS WITH COMPLETELY UNKNOWN DYNAMICS USING Q-LEARNING
    Zhao, Jingang
    Zhang, Chi
    JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION, 2021, 17 (03) : 1471 - 1483
  • [44] Discrete-Time Optimal Control Scheme Based on Q-Learning Algorithm
    Wei, Qinglai
    Liu, Derong
    Song, Ruizhuo
    2016 SEVENTH INTERNATIONAL CONFERENCE ON INTELLIGENT CONTROL AND INFORMATION PROCESSING (ICICIP), 2016, : 125 - 130
  • [45] Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis
    Wei, Qinglai
    Lewis, Frank L.
    Sun, Qiuye
    Yan, Pengfei
    Song, Ruizhuo
    IEEE TRANSACTIONS ON CYBERNETICS, 2017, 47 (05) : 1224 - 1237
  • [46] Q-learning solution for optimal consensus control of discrete-time multiagent systems using reinforcement learning
    Mu, Chaoxu
    Zhao, Qian
    Gao, Zhongke
    Sun, Changyin
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2019, 356 (13): 6946 - 6967
  • [47] Off-policy Q-learning-based Tracking Control for Stochastic Linear Discrete-Time Systems
    Liu, Xuantong
    Zhang, Lei
    Peng, Yunjian
    2022 4TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS, ICCR, 2022, : 252 - 256
  • [48] Quantized measurements in Q-learning based model-free optimal control
    Tiistola, Sini
    Ritala, Risto
    Vilkko, Matti
    IFAC PAPERSONLINE, 2020, 53 (02): 1640 - 1645
  • [49] An iterative Q-learning scheme for the global stabilization of discrete-time linear systems subject to actuator saturation
    Rizvi, Syed Ali Asad
    Lin, Zongli
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2019, 29 (09) : 2660 - 2672
  • [50] Finite-horizon Q-learning for discrete-time zero-sum games with application to H∞ control
    Liu, Mingxiang
    Cai, Qianqian
    Meng, Wei
    Li, Dandan
    Fu, Minyue
    ASIAN JOURNAL OF CONTROL, 2023, 25 (04) : 3160 - 3168