Convergence of neural networks for programming problems via a nonsmooth Łojasiewicz inequality

Cited by: 88
Authors
Forti, Mauro
Nistri, Paolo
Quincampoix, Marc
Affiliations
[1] Univ Siena, Dipartimento Ingn Informaz, I-53100 Siena, Italy
[2] Univ Bretagne Occidentale, Math Lab, F-29285 Brest, France
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2006, Vol. 17, No. 6
Keywords
convergence in finite time; exponential convergence; generalized gradient; Łojasiewicz inequality; neural networks (NNs); programming problems
DOI
10.1109/TNN.2006.879775
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems in which an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled as ideal diodes with vertical segments in their characteristics, which enables the implementation of an exact penalty method. A new method is used to address the convergence of trajectories, based on a nonsmooth Łojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length and, as a consequence, converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Łojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: 1) for nonconvex QP problems, each trajectory is either exponentially convergent or convergent in finite time toward a singleton belonging to the set of constrained critical points; 2) for convex QP problems, the same result as in 1) holds; moreover, the singleton belongs to the set of global minimizers; and 3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve on previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points; in particular, they remain valid even when the NN possesses infinitely many nonisolated equilibrium points.
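
For readers unfamiliar with the tool named in the abstract, the following is a minimal sketch of a nonsmooth Łojasiewicz inequality and the rate dichotomy it yields for generalized gradient flows. The notation (V, ∂V, x̄, c, θ) is illustrative standard usage, not the paper's own definitions.

% Sketch (standard theory, not reproduced from the paper): the nonsmooth
% Lojasiewicz inequality and the convergence-rate dichotomy it induces
% for trajectories of a generalized gradient flow.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
The NN dynamics are modeled as a differential inclusion driven by the
Clarke generalized gradient $\partial V$ of a locally Lipschitz energy
$V$ (here, an exact penalty function):
\[
  \dot{x}(t) \in -\partial V\bigl(x(t)\bigr), \qquad t \ge 0 .
\]
$V$ satisfies a nonsmooth \L{}ojasiewicz inequality at an equilibrium
$\bar{x}$ if there exist $c > 0$, an exponent $\theta \in [0,1)$, and a
neighborhood $U$ of $\bar{x}$ such that
\[
  \bigl| V(x) - V(\bar{x}) \bigr|^{\theta}
  \;\le\; c \, \operatorname{dist}\bigl(0, \partial V(x)\bigr)
  \qquad \text{for all } x \in U .
\]
The exponent controls the rate at which a trajectory approaches $\bar{x}$:
\[
  \theta = \tfrac{1}{2} \;\Longrightarrow\; \text{exponential convergence},
  \qquad
  \theta \in \bigl[0, \tfrac{1}{2}\bigr) \;\Longrightarrow\; \text{convergence in finite time}.
\]
\end{document}

Under this reading, the abstract's results amount to showing that the penalty energy of these NNs admits a Łojasiewicz exponent of at most 1/2 at the equilibrium points, with the finite-time case applying in particular to LP problems.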
Pages: 1471-1486
Page count: 16