Event-Triggered H∞ Control for Continuous-Time Nonlinear System via Concurrent Learning

Cited: 157
Authors
Zhang, Qichao [1 ]
Zhao, Dongbin [1 ]
Zhu, Yuanheng [1 ]
Affiliations
[1] Chinese Acad Sci, State Key Lab Management & Control Complex Syst, Inst Automat, Beijing 100190, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS | 2017, Vol. 47, No. 7
Funding
National Natural Science Foundation of China;
Keywords
Concurrent learning; event-triggered control; H-infinity optimal control; neural networks (NNs); zero-sum (ZS) game; ZERO-SUM GAMES; STATE-FEEDBACK CONTROL; UNKNOWN DYNAMICS; ALGORITHM; EQUATION; DESIGNS;
DOI
10.1109/TSMC.2016.2531680
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In this paper, the H-infinity optimal control problem for a class of continuous-time nonlinear systems is investigated using an event-triggered method. First, the H-infinity optimal control problem is formulated as a two-player zero-sum (ZS) differential game. Then, an adaptive triggering condition is derived for the ZS game with an event-triggered control policy and a time-triggered disturbance policy. The event-triggered controller is updated only when the triggering condition is violated, which reduces the communication between the plant and the controller. Furthermore, a positive lower bound on the minimal intersample time is provided to exclude Zeno behavior. For implementation purposes, an event-triggered concurrent learning algorithm is proposed, in which only one critic neural network (NN) is used to approximate the value function and the associated control and disturbance policies. During the learning process, the traditional persistence of excitation condition is relaxed by using recorded data together with instantaneous data. Meanwhile, the stability of the closed-loop system and the uniform ultimate boundedness (UUB) of the critic NN's parameters are proved using the Lyapunov technique. Finally, simulation results verify the feasibility of the proposed approach for the ZS game and the corresponding H-infinity control problem.
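To make the abstract's mechanism concrete, the sketch below illustrates, under simplifying assumptions, how an event-triggered control policy, a time-triggered worst-case disturbance, and a concurrent-learning critic update can fit together in one loop. The plant dynamics, quadratic critic basis, gains, and fixed triggering threshold are placeholders chosen for illustration and are not taken from the paper; the paper's adaptive triggering condition and exact weight-update law are not reproduced here. The policy forms u = -½R⁻¹gᵀ∇V and w = (1/(2γ²))kᵀ∇V are the standard zero-sum-game expressions derived from a single critic.

```python
import numpy as np

# Hypothetical placeholder plant: dx/dt = f(x) + g(x) u + k(x) w.
def f(x):  return np.array([-x[0] + x[1], -0.5 * (x[0] + x[1])])
def g(x):  return np.array([[0.0], [1.0]])
def k(x):  return np.array([[0.0], [1.0]])

# Quadratic critic basis phi(x) = [x1^2, x1*x2, x2^2] and its Jacobian.
def phi(x):   return np.array([x[0]**2, x[0]*x[1], x[1]**2])
def dphi(x):  return np.array([[2*x[0], 0.0],
                               [x[1],   x[0]],
                               [0.0,    2*x[1]]])

Q, R, gamma = np.eye(2), 1.0, 5.0     # cost weights and attenuation level
alpha, threshold = 1.0, 0.05          # critic step size, fixed triggering threshold
dt, T = 1e-3, 5.0

W = 0.5 * np.ones(3)                  # critic weights: V(x) ~ W^T phi(x)
memory = []                           # recorded samples for concurrent learning
x = np.array([1.0, -1.0])
x_hat = x.copy()                      # state last transmitted to the controller

def hji_residual(W, xs, us, ws):
    """Residual of the Hamilton-Jacobi-Isaacs equation at one sample."""
    xdot = f(xs) + g(xs) @ us + k(xs) @ ws
    grad_V = dphi(xs).T @ W           # approximate gradient of V at xs
    e = grad_V @ xdot + xs @ Q @ xs + R * float(us @ us) - gamma**2 * float(ws @ ws)
    return e, dphi(xs) @ xdot         # residual and regressor for the weight update

for step in range(int(T / dt)):
    # Event-triggered sampling: transmit the state only when the gap is too large.
    if np.linalg.norm(x - x_hat) > threshold:
        x_hat = x.copy()
    # Policies derived from the single critic (standard zero-sum-game forms).
    u = -0.5 / R * g(x_hat).T @ (dphi(x_hat).T @ W)      # held between events
    w = 0.5 / gamma**2 * k(x).T @ (dphi(x).T @ W)        # time-triggered disturbance
    # Concurrent learning: gradient step on recorded data plus the current sample.
    memory.append((x.copy(), u.copy(), w.copy()))
    for xs, us, ws in memory[-10:]:
        e, reg = hji_residual(W, xs, us, ws)
        W -= alpha * dt * e * reg / (1.0 + reg @ reg)    # normalized gradient descent
    # Integrate the plant one Euler step.
    x = x + dt * (f(x) + g(x) @ u + k(x) @ w)

print("final critic weights:", W)
```

Holding u at the last sampled state x_hat between events is what reduces plant-controller communication; the paper instead derives an adaptive threshold together with a positive minimal intersample time that rules out Zeno behavior, which the fixed threshold above does not capture.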
Pages: 1071-1081
Page count: 11
Related Papers
47 records in total
  • [1] Policy iterations on the Hamilton-Jacobi-Isaacs equation for H∞ state feedback control with input saturation
    Abu-Khalaf, Murad
    Lewis, Frank L.
    Huang, Jie
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2006, 51 (12) : 1989 - 1995
  • [2] Neurodynamic programming and zero-sum games for constrained control systems
    Abu-Khalaf, Murad
    Lewis, Frank L.
    Huang, Jie
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 2008, 19 (07) : 1243 - 1252
  • [3] Model-free Q-learning designs for linear discrete-time zero-sum games with application to H-infinity control
    Al-Tamimi, Asma
    Lewis, Frank L.
    Abu-Khalaf, Murad
    [J]. AUTOMATICA, 2007, 43 (03) : 473 - 481
  • [4] Adaptive critic designs for discrete-time zero-sum games with application to H∞ control
    Al-Tamimi, Asma
    Abu-Khalaf, Murad
    Lewis, Frank L.
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2007, 37 (01) : 240 - 247
  • [5] [Anonymous], H∞ OPTIMAL CONTROL R
  • [6] Basar T., 1995, Dynamic Noncooperative Game Theory
  • [7] Missile defense and interceptor allocation by neuro-dynamic programming
    Bertsekas, DP
    Homer, ML
    Logan, DA
    Patek, SD
    Sandell, NR
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS, 2000, 30 (01) : 42 - 51
  • [8] Boyd S., 1994, SIAM STUDIES APPL MA
  • [9] Concurrent Learning for Convergence in Adaptive Control without Persistency of Excitation
    Chowdhary, Girish
    Johnson, Eric
    [J]. 49TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2010 : 3674 - 3679
  • [10] On H∞ Estimation of Randomly Occurring Faults for A Class of Nonlinear Time-Varying Systems With Fading Channels
    Dong, Hongli
    Wang, Zidong
    Ding, Steven X.
    Gao, Huijun
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2016, 61 (02) : 479 - 484