Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation

Cited by: 192
Authors
Li, Shuai [1 ]
Li, Yangming [2 ]
Affiliations
[1] Stevens Inst Technol, Dept Elect & Comp Engn, Hoboken, NJ 07030 USA
[2] Chinese Acad Sci, Inst Intelligent Machines, Robot Sensor & Human Machine Interact Lab, Hefei 230031, Peoples R China
Keywords
Complex-valued recurrent neural network; complex-valued Sylvester equation; finite-time convergence; sign-bi-power function; LOCALIZATION;
DOI
10.1109/TCYB.2013.2285166
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation, which is defined over the complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with O(n^3) time complexity. When applied to the time-varying Sylvester equation, however, their computational burden grows sharply as the sampling period decreases and cannot meet the requirements of continuous real-time computation. For the special case of the general Sylvester equation defined over the real numbers, gradient-based recurrent neural networks can solve the time-varying Sylvester equation in real time, but they always leave a residual estimation error, whereas the recurrent neural network recently proposed by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the exact solution. Advances in complex-valued neural networks make it natural to extend the existing real-valued ZNN for the time-varying real-valued Sylvester equation to its counterpart in the complex domain. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation is investigated, and the global convergence of the neural network is proven under the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function built on a core function called the sign-bi-power function is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing; in this case, an upper bound on the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
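The abstract describes the ZNN design and the sign-bi-power activation only in words. The sketch below is a minimal illustration, not the authors' implementation: it follows the standard ZNN design formula from the general ZNN literature, defining the error E(t) = A(t)X(t) + X(t)B(t) - C(t) and imposing dE/dt = -gamma * Phi(E(t)), with the sign-bi-power core function phi(u) = (|u|^r sgn(u) + |u|^{1/r} sgn(u))/2, 0 < r < 1. Applying the activation to the real and imaginary parts separately is an assumption, and all matrices, parameter values, and names (znn_step, sign_bi_power, activate) are illustrative.

```python
# Hedged sketch: a ZNN with sign-bi-power activation for a time-varying
# complex Sylvester equation A(t) X(t) + X(t) B(t) = C(t).
# Design formula (standard ZNN literature, assumed here):
#   E = A X + X B - C,   dE/dt = -gamma * Phi(E).
import numpy as np

def sign_bi_power(u, r=0.5):
    """Elementwise sign-bi-power core function on a real array, 0 < r < 1."""
    return 0.5 * (np.sign(u) * np.abs(u) ** r + np.sign(u) * np.abs(u) ** (1.0 / r))

def activate(E, r=0.5):
    """Apply the activation to real and imaginary parts separately (assumption)."""
    return sign_bi_power(E.real, r) + 1j * sign_bi_power(E.imag, r)

def znn_step(X, t, dt, A, B, C, dA, dB, dC, gamma=10.0):
    """One Euler step of the implicit ZNN dynamics
       A Xdot + Xdot B = Cdot - Adot X - X Bdot - gamma * Phi(A X + X B - C)."""
    n = X.shape[0]
    E = A(t) @ X + X @ B(t) - C(t)
    rhs = dC(t) - dA(t) @ X - X @ dB(t) - gamma * activate(E)
    # Vectorize: (I kron A + B^T kron I) vec(Xdot) = vec(rhs), column-major vec.
    M = np.kron(np.eye(n), A(t)) + np.kron(B(t).T, np.eye(n))
    xdot = np.linalg.solve(M, rhs.reshape(-1, order="F"))
    return X + dt * xdot.reshape(n, n, order="F")

if __name__ == "__main__":
    # Toy 2x2 time-varying complex coefficients (illustrative only).
    A = lambda t: np.array([[2 + np.sin(t), 0.5j], [0.5, 2 - 1j * np.cos(t)]])
    dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, 1j * np.sin(t)]])
    B = lambda t: np.array([[1.0, 0.2], [0.0, 1 + 0.5j * np.sin(t)]])
    dB = lambda t: np.array([[0.0, 0.0], [0.0, 0.5j * np.cos(t)]])
    C = lambda t: np.array([[np.exp(1j * t), 1.0], [0.0, np.cos(t)]])
    dC = lambda t: np.array([[1j * np.exp(1j * t), 0.0], [0.0, -np.sin(t)]])

    X, dt, steps = np.zeros((2, 2), dtype=complex), 1e-3, 5000
    for k in range(steps):
        X = znn_step(X, k * dt, dt, A, B, C, dA, dB, dC)
    t = steps * dt
    print("residual norm:", np.linalg.norm(A(t) @ X + X @ B(t) - C(t)))
```

The residual norm printed at the end should shrink toward zero as the state tracks the time-varying solution; the sign-bi-power term with exponent 1/r speeds up convergence far from the solution, while the term with exponent r dominates near it, which is the mechanism behind the finite-time convergence claim.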
Pages: 1397-1407
Number of pages: 11