Stochastic perturbation of reduced gradient & GRG methods for nonconvex programming problems
Cited by: 11
Authors:
El Mouatasim, Abdelkrim [1,2]
Ellaia, Rachid [1]
Souza de Cursi, Eduardo [3]
Affiliations:
[1] Mohammed V Univ Agdal, Lab Study & Res Appl Math, LERMA, Mohammadia Sch Engn, Rabat, Morocco
[2] Univ Ibn Zohr, Fac Polydisciplinaire, Ouarzazate, Morocco
[3] INSA Rouen, LOFIMS EA CNRS 3828, St Etienne, France
In this paper, we consider nonconvex differentiable programming under linear and nonlinear differentiable constraints. Reduced gradient and GRG (generalized reduced gradient) descent methods involving stochastic perturbation are proposed, and a mathematical result establishing convergence to a global minimizer is given. Numerical examples demonstrate the computational effectiveness of the method on classical tests, namely the statistical problem, the octagon problem, the mixture problem, and an application to the linear optimal control servomotor problem. (C) 2013 Elsevier Inc. All rights reserved.
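To illustrate the general idea described in the abstract (not the authors' exact algorithm), the following minimal Python sketch minimizes a hypothetical nonconvex objective subject to linear equality constraints A x = b: each iteration takes a reduced (null-space projected) gradient step and adds a decaying Gaussian perturbation so that the iterates can escape local minima. The objective f, the step size, and the perturbation schedule sigma0 / log(k + 2) are illustrative assumptions.

import numpy as np

def f(x):
    # Hypothetical nonconvex (double-well) test objective.
    return np.sum(x**4 - 8.0 * x**2 + x)

def grad_f(x):
    return 4.0 * x**3 - 16.0 * x + 1.0

def perturbed_reduced_gradient(x0, A, n_iter=2000, step=1e-2, sigma0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Orthonormal basis Z of the null space of A: feasible directions satisfy A d = 0.
    _, _, vt = np.linalg.svd(A)
    Z = vt[A.shape[0]:].T
    x = x0.copy()
    best_x, best_val = x.copy(), f(x)
    for k in range(1, n_iter + 1):
        # Reduced gradient: the gradient expressed in null-space coordinates.
        g_red = Z.T @ grad_f(x)
        # Decaying perturbation: large early on for exploration, vanishing asymptotically.
        sigma_k = sigma0 / np.log(k + 2.0)
        noise = sigma_k * rng.standard_normal(g_red.shape)
        # Step stays feasible because it lies in the null space of A.
        x = x - step * (Z @ (g_red - noise))
        val = f(x)
        if val < best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

if __name__ == "__main__":
    A = np.array([[1.0, 1.0, 1.0]])
    x0 = np.array([1.0, -2.0, 1.0])   # feasible: A x0 = 0
    x_star, val = perturbed_reduced_gradient(x0, A)
    print("approx. minimizer:", x_star, "objective:", val)

Recording the best iterate seen so far is one common way to return a usable point even though individual perturbed iterates are not monotone in the objective.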