Data-Driven H∞ Control of Networked Nonlinear Systems With External Disturbances and Random Communication Packet Losses

Cited by: 7
Authors
Jiang, Yi [1 ,2 ]
Xie, Shengli [3 ]
Chen, Guanrong [1 ,2 ]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
[2] City Univ Hong Kong, Ctr Chaos & Complex Networks, Hong Kong, Peoples R China
[3] Guangdong Univ Technol, Sch Automat, Guangdong HongKong Macao Joint Lab Smart Discrete, Guangzhou 510006, Peoples R China
Source
IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS | 2024, Vol. 11, No. 03
Funding
National Natural Science Foundation of China;
Keywords
H-infinity control; adaptive/approximate dynamic programming (ADP); nonlinear systems; packet loss; ZERO-SUM GAMES; TIME-SYSTEMS; FEEDBACK;
DOI
10.1109/TCNS.2023.3338242
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline code
0812;
Abstract
This article investigates the H-infinity control problem for partially unknown discrete-time nonlinear systems with external disturbances and Bernoulli model-based random packet losses in different communication channels. Based on game theory, the computed control input and the external disturbance are considered, respectively, as the minimizing and maximizing players for satisfying an H-infinity control performance index of the concerned networked nonlinear system. A Bernoulli model-based stochastic zero-sum game is then formulated and a Bernoulli model-based Hamilton-Jacobi-Isaacs equation is established. It is proven that the solutions to the developed equation yield a globally stochastically asymptotically stable closed-loop system when external disturbances are absent and, when they are present, the H-infinity control performance index is satisfied for all deterministic square-summable external disturbances. An adaptive/approximate dynamic programming and reinforcement learning-based data-driven value iteration (VI) algorithm is developed to approximately solve the associated equation and learn the ideal feedback policy for the H-infinity control problem with guaranteed convergence. Finally, a simulation study of the proposed data-driven VI algorithm demonstrates its effectiveness.
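The zero-sum-game formulation of H-infinity control summarized above can be illustrated on a much simpler case than the paper treats. The sketch below runs model-based value iteration for a discrete-time linear-quadratic zero-sum game with known dynamics and no packet losses; all matrices and the attenuation level `gamma` are hypothetical illustration values, and this is not the paper's data-driven nonlinear algorithm:

```python
import numpy as np

# Hypothetical system for illustration (not from the paper):
# x_{k+1} = A x_k + B u_k + D w_k,
# cost J = sum_k x'Qx + u'Ru - gamma^2 w'w (u minimizes, w maximizes).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
D = np.array([[1.0], [0.0]])
Q = np.eye(2)
R = np.array([[1.0]])
gamma = 10.0

def vi_zero_sum(A, B, D, Q, R, gamma, iters=500, tol=1e-10):
    """Value iteration on the game Riccati recursion
    P_{k+1} = Q + A'P_k A - S_k M_k^{-1} S_k',
    with S_k = [A'P_k B, A'P_k D] and
    M_k = [[R + B'P_k B,  B'P_k D ],
           [D'P_k B,      D'P_k D - gamma^2 I]]."""
    n, mw = A.shape[0], D.shape[1]
    P = np.zeros((n, n))
    for _ in range(iters):
        S = np.hstack([A.T @ P @ B, A.T @ P @ D])
        M = np.block([[R + B.T @ P @ B, B.T @ P @ D],
                      [D.T @ P @ B, D.T @ P @ D - gamma**2 * np.eye(mw)]])
        P_next = Q + A.T @ P @ A - S @ np.linalg.solve(M, S.T)
        if np.linalg.norm(P_next - P) < tol:
            P = P_next
            break
        P = P_next
    # Saddle-point gains from the stationarity conditions:
    # u* = -K_u x (control player), w* = -K_w x (disturbance player).
    G = np.linalg.solve(M, np.vstack([B.T @ P @ A, D.T @ P @ A]))
    mu = B.shape[1]
    return P, G[:mu], G[mu:]

P, K_u, K_w = vi_zero_sum(A, B, D, Q, R, gamma)
```

Starting from `P = 0`, each iterate is the value of a finite-horizon game, so `P` increases monotonically and converges when the attenuation level `gamma` is achievable (here `gamma` is chosen generously large). The paper's algorithm replaces this model-based recursion with a data-driven VI that handles nonlinear dynamics and Bernoulli packet losses.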
Pages: 1358-1369 (12 pages)
References: 41