Derivative-Free Optimization Via Proximal Point Methods

Cited by: 11
Authors
Hare, W. L. [1 ]
Lucet, Y. [2 ]
Affiliations
[1] Univ British Columbia, Kelowna, BC V1V 1V7, Canada
[2] UBCO, Kelowna, BC, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Derivative-free optimization; Proximal point method; Unconstrained minimization; BUNDLE METHOD; NONCONVEX FUNCTIONS; MINIMIZATION; NONSMOOTH; ALGORITHM; INTERPOLATION; SMOOTHNESS; REGULARITY; GEOMETRY; SETS;
DOI
10.1007/s10957-013-0354-0
Chinese Library Classification
C93 [Management Science]; O22 [Operations Research];
Subject Classification Codes
070105; 12; 1201; 1202; 120202;
Abstract
Derivative-Free Optimization (DFO) examines the challenge of minimizing (or maximizing) a function without explicit use of derivative information. Many standard techniques in DFO are based on using model functions to approximate the objective function, and then applying classic optimization methods to the model function. For example, the details behind adapting steepest descent, conjugate gradient, and quasi-Newton methods to DFO have been studied in this manner. In this paper we demonstrate that the proximal point method can also be adapted to DFO. To that end, we provide a derivative-free proximal point (DFPP) method and prove convergence of the method in a general sense. In particular, we give conditions under which the gradient values of the iterates converge to 0, and conditions under which an iterate corresponds to a stationary point of the objective function.
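To make the abstract's description concrete (sample the objective without derivatives, fit a model, then take a proximal-style step by minimizing the regularized model), the following is a minimal illustrative sketch in Python. It is not the DFPP algorithm analyzed in the paper: the diagonal quadratic model, the coordinate sampling stencil, the fixed proximal parameter r, and the stopping test are all simplifying assumptions made here for illustration.

```python
import numpy as np

def build_quadratic_model(f, center, h=1e-2):
    """Fit a diagonal quadratic model of f around `center` using only
    function samples (no derivative calls)."""
    n = len(center)
    f0 = f(center)
    g = np.zeros(n)  # model gradient estimate
    d = np.zeros(n)  # model diagonal curvature estimate
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        fp, fm = f(center + e), f(center - e)
        g[i] = (fp - fm) / (2.0 * h)
        d[i] = (fp - 2.0 * f0 + fm) / h**2
    return f0, g, d

def proximal_model_step(f, xk, r=1.0, h=1e-2):
    """One proximal-point-style step on the sampled model:
    minimize m(x) + (r/2)*||x - xk||^2, where m is the diagonal quadratic.
    The subproblem is separable, so its minimizer is computed coordinate-wise;
    the curvature is shifted by r (and floored) to keep it strongly convex."""
    _, g, d = build_quadratic_model(f, xk, h)
    denom = np.maximum(d + r, 1e-8)
    return xk - g / denom

def dfpp_sketch(f, x0, r=1.0, h=1e-2, max_iter=100, tol=1e-8):
    """Iterate the model-based proximal step until the iterates stall."""
    xk = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = proximal_model_step(f, xk, r, h)
        if np.linalg.norm(x_new - xk) < tol:
            break
        xk = x_new
    return xk

if __name__ == "__main__":
    # Smooth test problem; the sketch converges to the minimizer near (1, -2).
    f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
    print(dfpp_sketch(f, np.array([5.0, 5.0])))
```

The sketch only illustrates the structure of a model-based proximal iteration; the paper's convergence guarantees (gradient values of iterates tending to 0, stationarity of accumulation points) depend on model accuracy and parameter management not reproduced here.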
Pages: 204-220
Page count: 17