Observer Based Event-triggered Fault Compensation Control for Nonlinear Systems via Adaptive Dynamic Programming

Cited by: 0
Authors
Luo, Fangchao [1 ]
Zhao, Bo [2 ]
Liu, Derong [1 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
[2] Beijing Normal Univ, Sch Syst Sci, Beijing 100875, Peoples R China
Source
2020 10TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND TECHNOLOGY (ICIST) | 2020
Funding
National Natural Science Foundation of China;
Keywords
Adaptive dynamic programming; Event-triggered mechanism; Adaptive fault compensation; Optimal control; Neural network; TOLERANT CONTROL; ROBUST; STABILIZATION; PERFORMANCE; ALGORITHMS;
DOI
10.1109/icist49303.2020.9202005
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812;
Abstract
This paper develops an observer-based event-triggered fault compensation control method for a class of nonlinear continuous-time systems with actuator failures via adaptive dynamic programming (ADP). The proposed control method incorporates an event-triggered optimal control policy and an adaptive observer-based compensator. Owing to the excellent approximation capability of neural networks, the actuator failure is accurately estimated by an online learning observer. By combining the event-triggered mechanism with ADP, the event-triggered optimal control policy is obtained by employing a critic neural network to solve the Hamilton-Jacobi-Bellman equation. The triggering condition is provided along with the stability analysis of the closed-loop system. Finally, a simulation of a single-link robot arm system is presented to confirm the effectiveness of the proposed fault compensation control method.
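For illustration only, the following is a minimal sketch of the standard ADP formulation that underlies the abstract, written in an assumed notation (the dynamics f and g, failure term u_f, weighting matrices Q and R, and triggering instants s_j are assumptions for exposition and are not taken from the paper itself):

\begin{align}
  % faulty system dynamics with an additive actuator failure u_f to be compensated
  \dot{x}(t) &= f\bigl(x(t)\bigr) + g\bigl(x(t)\bigr)\bigl(u(t) + u_f(t)\bigr), \\
  % infinite-horizon cost functional minimized by the nominal optimal control
  J(x_0) &= \int_{0}^{\infty} \bigl(x^{\top} Q x + u^{\top} R u\bigr)\,\mathrm{d}t, \\
  % Hamilton-Jacobi-Bellman equation, approximated online by the critic neural network
  0 &= \min_{u}\Bigl[x^{\top} Q x + u^{\top} R u + \nabla J^{*}(x)^{\top}\bigl(f(x) + g(x)u\bigr)\Bigr], \\
  % event-triggered optimal control, held between triggering instants at the sampled state x(s_j)
  u^{*}\bigl(x(s_j)\bigr) &= -\tfrac{1}{2} R^{-1} g\bigl(x(s_j)\bigr)^{\top} \nabla J^{*}\bigl(x(s_j)\bigr).
\end{align}

In this kind of scheme, the applied input is typically the sum of the event-triggered optimal policy and the observer-based estimate of the actuator failure, and the control is only updated when the triggering condition on the gap between the current and sampled states is violated.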
Pages: 139-144
Number of pages: 6