Design of asymptotic estimators: An approach based on neural networks and nonlinear programming

Cited: 32
Authors
Alessandri, Angelo [1]
Cervellera, Cristiano
Sanguineti, Marcello
Affiliations
[1] Univ Genoa, DIPTEM, I-16129 Genoa, Italy
[2] CNR, ISSIA, I-16149 Genoa, Italy
[3] Univ Genoa, DIST, I-16145 Genoa, Italy
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2007, Vol. 18, No. 1
Keywords
feedforward neural networks; Lyapunov function; offline optimization; penalty function; quasi-random sequences; state observer;
DOI
10.1109/TNN.2006.883015
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A methodology for designing state estimators for a class of nonlinear continuous-time dynamic systems, based on neural networks and nonlinear programming, is proposed. The estimator has the structure of a Luenberger observer with a linear gain and a parameterized (in general, nonlinear) function, whose argument is an innovation term representing the difference between the current measurement and its prediction. The estimator design problem consists in finding the values of the gain and of the parameters that guarantee the asymptotic stability of the estimation error. Toward this end, if a neural network is used to implement this function, the parameters (i.e., the neural weights) are chosen, together with the gain, by constraining the derivative of a quadratic Lyapunov function for the estimation error to be negative definite on a given compact set. It is proved that it suffices to impose the negative definiteness of such a derivative only on a suitably dense grid of sampling points. The gain is determined by solving a Lyapunov equation. The neural weights are searched for via nonlinear programming by minimizing a cost that penalizes grid-point constraints that are not satisfied. Techniques based on low-discrepancy sequences are applied so that a small number of sampling points suffices, hence reducing the computational burden required to optimize the parameters. Numerical results are reported and compared with those obtained by the extended Kalman filter.
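For concreteness, the estimator structure described in the abstract can be sketched as follows (the symbols f, h, L, \gamma, and w are notational assumptions made here for illustration; the paper's exact system class is not reproduced):

\dot{\hat{x}}(t) = f\big(\hat{x}(t)\big) + L\,\big[y(t) - h(\hat{x}(t))\big] + \gamma\big(w,\, y(t) - h(\hat{x}(t))\big),

where L is the linear gain, \gamma is the parameterized (e.g., neural) function of the innovation y - h(\hat{x}), and w collects the neural weights. Writing V(e) = e^{\top} P e for a quadratic Lyapunov function of the estimation error e = x - \hat{x}, the design imposes \dot{V}(e) < 0 on a given compact set; the paper's result is that enforcing this inequality only on a suitably dense grid of sampling points suffices.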
Pages: 86-96
Page count: 11
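The following Python snippet is a minimal sketch of the optimization step described in the abstract, not the authors' code: the error dynamics, the observer gain, the network size, and the margin are illustrative assumptions, and only the overall scheme (a Sobol low-discrepancy grid over a compact set, a Lyapunov matrix obtained from a Lyapunov equation, and a penalty on violated grid-point constraints) follows the abstract.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize
from scipy.stats import qmc

# Low-discrepancy (Sobol) sampling of a compact set E = [-1, 1]^2 of
# estimation errors, following the abstract's use of quasi-random sequences.
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
grid = qmc.scale(sampler.random_base2(m=7), [-1.0, -1.0], [1.0, 1.0])

# Hypothetical linearized error dynamics and observer gain; the Lyapunov
# matrix P solves (A - L C)' P + P (A - L C) = -I, as in the abstract's
# use of a Lyapunov equation.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.0], [2.0]])
P = solve_continuous_lyapunov((A - L @ C).T, -np.eye(2))  # V(e) = e' P e

def gamma(w, innov):
    """One-hidden-layer tanh network acting on the (scalar) innovation."""
    W1, b1, W2 = w[:4].reshape(4, 1), w[4:8], w[8:16].reshape(2, 4)
    return W2 @ np.tanh(W1 @ innov + b1)

def vdot(w, e):
    """Derivative of V along the assumed error dynamics
    de/dt = (A - L C) e + gamma(w, C e)."""
    edot = (A - L @ C) @ e + gamma(w, C @ e)
    return 2.0 * e @ P @ edot

def penalty(w, margin=1e-3):
    """Quadratic penalty on grid points violating vdot(e) <= -margin*||e||^2."""
    return sum(max(0.0, vdot(w, e) + margin * (e @ e)) ** 2 for e in grid)

w0 = 0.1 * np.random.default_rng(1).standard_normal(16)
res = minimize(penalty, w0, method="Powell")
print("residual penalty:", res.fun)  # ~0: all grid constraints are satisfied
```

A vanishing residual penalty means the Lyapunov-derivative condition holds at every grid point; the paper's contribution is the guarantee that, for a suitably dense grid, this implies negative definiteness on the whole compact set.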