Event-Triggered Robust Adaptive Dynamic Programming With Output Feedback for Large-Scale Systems

Cited by: 20
Authors
Zhao, Fuyu [1 ]
Gao, Weinan [2 ]
Liu, Tengfei [3 ]
Jiang, Zhong-Ping [4 ]
Affiliations
[1] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Peoples R China
[2] Florida Inst Technol, Coll Engn & Sci, Dept Mech & Civil Engn, Melbourne, FL 32901 USA
[3] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Peoples R China
[4] NYU, Dept Elect & Comp Engn, Brooklyn, NY 11201 USA
Source
IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS | 2023, Vol. 10, No. 1
Funding
National Natural Science Foundation of China; U.S. National Science Foundation
Keywords
Optimal control; Control systems; Adaptive systems; Large-scale systems; Dynamic programming; Power system stability; Power system dynamics; Event-triggered control; output-feedback; robust adaptive dynamic programming (RADP); small-gain theory; ZERO-SUM GAMES; LINEAR-SYSTEMS; MODEL;
DOI
10.1109/TCNS.2022.3186623
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
In this article, an event-triggered output-feedback adaptive optimal control approach is proposed for large-scale systems with parametric and dynamic uncertainties, based on robust adaptive dynamic programming (RADP) and small-gain techniques. Instead of designing a Luenberger observer, the unmeasurable states are reconstructed from input and output data. To save communication resources and reduce the number of control updates, event-based feedback control policies are learned via policy iteration and value iteration, respectively. Closed-loop stability and the convergence of the proposed algorithms are analyzed using Lyapunov stability theory and small-gain techniques. A practical example of multimachine power systems with governor controllers demonstrates the effectiveness of the proposed methods.
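The policy-iteration step mentioned in the abstract can be illustrated by its classical model-based counterpart for the LQR problem (Kleinman's algorithm), which data-driven ADP schemes such as the one in this paper approximate from input/output data. This is a minimal sketch, not the paper's output-feedback algorithm; the matrices A, B, Q, R and the initial gain K0 below are hypothetical examples.

```python
# Sketch of Kleinman's policy iteration for continuous-time LQR.
# ADP/RADP methods learn the same iteration from data without knowing A.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def policy_iteration_lqr(A, B, Q, R, K0, iters=20):
    """Alternate policy evaluation (Lyapunov equation) and improvement."""
    K = K0  # K0 must be stabilizing: eig(A - B @ K0) in the open left half-plane
    for _ in range(iters):
        Ac = A - B @ K
        # Policy evaluation: solve Ac^T P + P Ac = -(Q + K^T R K)
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B^T P
        K = np.linalg.solve(R, B.T @ P)
    return K, P

# Hypothetical second-order unstable plant and LQR weights
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[0.0, 4.0]])  # stabilizing initial policy: eig(A - B K0) = {-1, -1}

K, P = policy_iteration_lqr(A, B, Q, R, K0)
P_star = solve_continuous_are(A, B, Q, R)  # ground truth from the Riccati equation
print(np.allclose(P, P_star, atol=1e-8))
```

The iterates P converge monotonically to the stabilizing solution of the algebraic Riccati equation, which is why a data-driven estimate of each Lyapunov solve suffices to recover the optimal gain.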
Pages: 63-74
Page count: 12