Adaptive optimal output regulation of linear discrete-time systems based on event-triggered output-feedback

Cited by: 32
Authors
Zhao, Fuyu [1 ]
Gao, Weinan [2 ]
Liu, Tengfei [1 ]
Jiang, Zhong-Ping [3 ]
Affiliations
[1] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110004, Peoples R China
[2] Florida Inst Technol, Coll Engn & Sci, Dept Mech & Civil Engn, Melbourne, FL 32901 USA
[3] NYU, 6 Metrotech Ctr, Dept Elect & Comp Engn, Brooklyn, NY 11201 USA
Funding
U.S. National Science Foundation; Japan Society for the Promotion of Science; National Natural Science Foundation of China; Australian Research Council
Keywords
Adaptive dynamic programming; Output regulation; Event-triggered control; Optimal tracking control; Nonlinear systems; Stability; Error; Model; Gain
DOI
10.1016/j.automatica.2021.110103
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This paper presents novel event-triggered control approaches to solve the adaptive optimal output regulation problem for a class of linear discrete-time systems. Unlike most existing research on output regulation, the developed adaptive optimal control approaches rely on (1) output feedback instead of full-state or partial-state feedback, (2) adaptive dynamic programming (ADP), which provides approximate solutions of the optimal control problem without requiring precise knowledge of the plant dynamics, and (3) an event-triggering mechanism that reduces communication between the controller and the plant. It is shown that the closed-loop system with the developed controllers is asymptotically stable at the equilibrium of interest, and the tracking errors converge asymptotically to zero. Moreover, the suboptimality of the closed-loop system is directly determined by the relative threshold, i.e., the ratio between the triggering threshold and the actual state. A numerical simulation example verifies the effectiveness of the proposed methodologies. (C) 2021 Elsevier Ltd. All rights reserved.
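As a rough illustration of the relative-threshold idea mentioned in the abstract, the sketch below simulates a simple discrete-time plant under static output feedback, where a new output sample is transmitted to the controller only when it deviates from the last transmitted sample by more than a fraction sigma of the current output norm. The plant matrices A, B, C, the gain Ky, and the threshold sigma are illustrative placeholders; this is not the paper's ADP-based output-feedback controller or its exact triggering law.

import numpy as np

def relative_threshold_trigger(y_current, y_last_sent, sigma):
    # Transmit only when the deviation from the last transmitted output
    # exceeds the fraction `sigma` of the current output norm.
    return np.linalg.norm(y_current - y_last_sent) > sigma * np.linalg.norm(y_current)

# Assumed discrete-time plant: x[k+1] = A x[k] + B u[k], y[k] = C x[k]
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 1.0]])
Ky = 0.4       # placeholder static output-feedback gain (not from the paper)
sigma = 0.2    # relative triggering threshold

x = np.array([[1.0], [0.0]])
y_sent = C @ x            # output sample held by the controller between events
events = 0
for k in range(50):
    y = C @ x
    if relative_threshold_trigger(y, y_sent, sigma):
        y_sent = y        # event: refresh the controller's copy of the output
        events += 1
    u = -Ky * y_sent      # controller acts only on the event-sampled output
    x = A @ x + B @ u
print(f"events over 50 steps: {events}")

In this kind of scheme, a larger sigma reduces the number of transmissions at the cost of a larger mismatch between the controller's held sample and the true output, which mirrors the abstract's statement that the suboptimality of the closed loop is determined by the relative threshold.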
Pages: 10