DIRECT SEARCH BASED ON PROBABILISTIC DESCENT

Cited by: 51
Authors
Gratton, S. [1 ]
Royer, C. W. [1 ]
Vicente, L. N. [2 ]
Zhang, Z. [3 ]
Affiliations
[1] ENSEEIHT, INPT, F-31071 Toulouse 7, France
[2] Univ Coimbra, Dept Math, CMUC, P-3001501 Coimbra, Portugal
[3] CERFACS IRIT Joint Lab, F-31000 Toulouse, France
Keywords
derivative-free optimization; direct-search methods; polling; positive spanning sets; probabilistic descent; random directions; CONVERGENCE; OPTIMIZATION;
DOI
10.1137/140961602
Chinese Library Classification (CLC)
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
Direct-search methods are a popular class of derivative-free algorithms characterized by evaluating the objective function along a number of (polling) directions scaled by a step size. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets, which must contain at least n + 1 vectors in an n-dimensional variable space. In addition, to ensure global convergence of these algorithms, the positive spanning sets used throughout the iterations are required to be uniformly nondegenerate, in the sense of having a positive (cosine) measure bounded away from zero. However, recent numerical results indicate that randomly generating the polling directions without imposing the positive spanning property can improve the performance of these methods, especially when the number of directions is considerably smaller than n + 1. In this paper, we analyze direct-search algorithms whose polling directions are probabilistic descent, meaning that with a certain probability at least one of them is of descent type. Such a framework enjoys almost-sure global convergence. More interestingly, we show a global decay rate of 1/√k for the gradient size, with overwhelmingly high probability, matching the corresponding rate of the deterministic versions of the gradient method and of direct search. Our analysis helps to understand the numerical behavior and the choice of the number of polling directions.
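The abstract describes an iteration that polls the objective along a small number of randomly generated directions and expands or contracts the step size depending on whether sufficient decrease is found. The following Python sketch illustrates that general scheme; the function name, the sufficient-decrease constant c, the number of directions m, and the expansion/contraction factors are illustrative assumptions, not the authors' exact algorithm or parameter choices.

```python
import numpy as np

def direct_search_probabilistic_descent(f, x0, m=2, alpha0=1.0, c=1e-3,
                                        gamma=2.0, theta=0.5, max_iter=1000):
    """Minimal sketch of direct search with random polling directions.
    All parameter names and default values are illustrative."""
    x = np.asarray(x0, dtype=float)
    alpha, fx, n = alpha0, f(x0), x.size
    for _ in range(max_iter):
        # Draw m directions uniformly on the unit sphere; m may be much
        # smaller than the n + 1 vectors a positive spanning set requires.
        D = np.random.randn(m, n)
        D /= np.linalg.norm(D, axis=1, keepdims=True)
        for d in D:
            trial = x + alpha * d
            ftrial = f(trial)
            # Sufficient-decrease test with forcing function c * alpha^2.
            if ftrial <= fx - c * alpha**2:
                x, fx = trial, ftrial
                alpha *= gamma   # successful poll: expand the step size
                break
        else:
            alpha *= theta       # unsuccessful poll: contract the step size
    return x, fx

# Illustrative usage on a simple smooth quadratic.
if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2)
    x_best, f_best = direct_search_probabilistic_descent(f, np.zeros(10))
    print(x_best, f_best)
```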
Pages: 1515-1541
Number of pages: 27
Related papers
29 items in total
[11] Diniz-Ehrhardt, M. A., Martinez, J. M., Raydan, M. A derivative-free nonmonotone line-search technique for unconstrained optimization. Journal of Computational and Applied Mathematics, 2008, 219(2): 383-397.
[12] Diouane, Y. Mathematical Programming, in press.
[13] Dodangeh, M. Mathematical Programming, in press.
[14] Doerr, B. Theory of Randomized Search Heuristics, 2011, p. 120. DOI: 10.1142/7438.
[15] Dolan, E. D., Lewis, R. M., Torczon, V. On the local convergence of pattern search. SIAM Journal on Optimization, 2003, 14(2): 567-583.
[16] Dubhashi, D. P. Concentration of Measure for the Analysis of Randomized Algorithms, 2009, p. 1. DOI: 10.1017/CBO9780511581274.
[17] Durrett, R. Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics, 2010.
[18] Gould, N. I. M., Orban, D., Toint, P. L. CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Transactions on Mathematical Software, 2003, 29(4): 373-394.
[19] Gratton, S., Vicente, L. N. A merit function approach for direct search. SIAM Journal on Optimization, 2014, 24(4): 1980-1998.
[20] Hansen, N., Ostermeier, A. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. Proceedings of the 1996 IEEE International Conference on Evolutionary Computation (ICEC '96), 1996: 312-317.