Finite-Time L∞ Performance State Estimation of Recurrent Neural Networks with Sampled-Data Signals

Cited by: 0
Authors
N. Gunasekaran
M. Syed Ali
S. Pavithra
Affiliations
[1] Department of Mathematical Sciences, Shibaura Institute of Technology
[2] Department of Mathematics, Thiruvalluvar University
Keywords
L∞ performance; Recurrent neural networks; Finite-time stabilization; Nonuniform sampling; Linear matrix inequalities (LMIs)
DOI
10.1007/s11063-019-10114-9
Abstract
In this paper, we investigate the finite-time L∞ performance state estimation of recurrent neural networks via a sampled-data control scheme. By constructing a novel Lyapunov functional, new stability and stabilization conditions are derived. Using integral inequality techniques, sufficient conditions in the form of linear matrix inequalities (LMIs) are obtained that ensure the finite-time stability of the considered neural networks. Furthermore, a finite-time observer gain analysis of the recurrent neural networks is set up to measure their disturbance tolerance over a fixed time interval. Numerical examples are given to verify the effectiveness of the proposed approach.
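The LMI-based stability certificates described in the abstract build on the classical Lyapunov argument: a system is stable if a positive definite matrix P satisfies a matrix inequality involving the system dynamics. The sketch below illustrates only this underlying idea, not the paper's actual finite-time conditions; the 2-state matrix A is a hypothetical stand-in for the estimation-error dynamics.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz system matrix for error dynamics e' = A e
# (illustrative only; not taken from the paper).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])

# Solve the Lyapunov equation A^T P + P A = -Q with Q = I.
# A positive definite solution P certifies asymptotic stability,
# the classical analogue of the LMI feasibility conditions derived
# in sampled-data state-estimation analyses.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# Check the certificate: all eigenvalues of P must be positive.
eigenvalues = np.linalg.eigvalsh(P)
print(np.all(eigenvalues > 0))  # True -> stability certified
```

In the paper's setting, the scalar Lyapunov equation above is replaced by LMIs that additionally encode the sampling intervals, the finite time horizon, and the L∞ disturbance-attenuation bound, and the observer gain is recovered from the feasible LMI variables.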
Pages: 1379–1392 (13 pages)