A conjecture on global optimization using gradient-free stochastic approximation

Cited by: 0
Authors
Maryak, JL [1]
Chin, DC [1]
Affiliations
[1] Johns Hopkins Univ, Appl Phys Lab, Laurel, MD 20723 USA
Source
JOINT CONFERENCE ON THE SCIENCE AND TECHNOLOGY OF INTELLIGENT SYSTEMS | 1998
Keywords
stochastic optimization; global convergence; stochastic approximation; simultaneous perturbation stochastic approximation; recursive annealing
DOI
10.1109/ISIC.1998.713702
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
A concern with iterative optimization techniques is ensuring that the algorithm reaches the global optimum rather than becoming stranded at a local optimum. One method for promoting global convergence is to inject extra noise terms into the recursion, which may allow the algorithm to escape local optima. The amplitude of the injected noise is decreased over time (a process called "annealing"), so that the algorithm can finally converge once it reaches the global optimum. In this context, we examine the performance of a certain "gradient-free" stochastic approximation algorithm. We argue that, in some cases, the naturally occurring error in the gradient approximation effectively acts as injected noise and promotes convergence of the algorithm to a global optimum. The discussion is supported by a numerical study.
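For orientation, the following is a minimal Python sketch of the standard simultaneous perturbation stochastic approximation (SPSA) recursion named in the keywords; it is not the authors' experimental setup. The function name spsa_minimize, the gain coefficients (a, c, A), and the toy multimodal loss are illustrative assumptions; the gain exponents 0.602 and 0.101 are the values commonly recommended in the SPSA literature. Note that the two-measurement gradient estimate carries a natural random error, which, as the abstract argues, can play the role of the injected noise and decays as the gains are annealed.

import numpy as np

def spsa_minimize(loss, theta0, n_iter=2000, a=0.1, c=0.1, A=100,
                  alpha=0.602, gamma=0.101, seed=None):
    """Sketch of the basic SPSA recursion (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        a_k = a / (k + 1 + A) ** alpha   # annealed step size
        c_k = c / (k + 1) ** gamma       # annealed perturbation size
        # Simultaneous Bernoulli +/-1 perturbation of all coordinates.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two loss evaluations yield the full gradient estimate,
        # regardless of dimension; its error supplies "natural" noise.
        diff = loss(theta + c_k * delta) - loss(theta - c_k * delta)
        g_hat = diff / (2.0 * c_k) * (1.0 / delta)
        theta = theta - a_k * g_hat
    return theta

if __name__ == "__main__":
    # Toy multimodal test function (assumed for illustration):
    # global minimum at the origin, local minima elsewhere.
    f = lambda x: np.sum(x ** 2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))
    print(spsa_minimize(f, theta0=np.full(2, 2.5), seed=0))

A design note on the sketch: because every coordinate is perturbed at once, each iteration costs only two loss evaluations, which is the "gradient-free" property that makes the algorithm attractive in high dimensions.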
Pages: 441 - 445
Number of pages: 5