Budget-dependent convergence rate of stochastic approximation

Cited by: 39
Authors
L'Ecuyer, P
Yin, G
Affiliations
[1] Univ Montreal, Dept IRO, Montreal, PQ H3C 3J7, Canada
[2] Wayne State Univ, Dept Math, Detroit, MI 48202 USA
Keywords
stochastic optimization; discrete-event systems; stochastic approximation; gradient estimate; rate of convergence; limit theorems
DOI
10.1137/S1052623495270723
Chinese Library Classification
O29 [Applied Mathematics]
Discipline classification code
070104
Abstract
Convergence rate results are derived for a stochastic optimization problem where a performance measure is minimized with respect to a vector parameter theta. Assuming that a gradient estimator is available and that both the bias and the variance of the estimator are (known) functions of the budget devoted to its computation, the gradient estimator is employed in conjunction with a stochastic approximation (SA) algorithm. Our interest is in how to allocate the total available computational budget to the successive SA iterations. The effort is devoted to solving the asymptotic version of this problem by finding the convergence rate of SA toward the optimizer, first as a function of the number of iterations and then as a function of the total computational effort. As a result, the optimal rate of increase of the computational budget per iteration can be found. Explicit expressions are derived for the case where the computational budget devoted to an iteration is polynomial in the iteration number, and where the bias and variance of the gradient estimator are polynomials of the computational budget. Applications include the optimization of steady-state simulation models with likelihood ratio, perturbation analysis, or finite-difference gradient estimators; optimization of infinite-horizon models with discounting; optimization of functions of several expectations; and so on. Several examples are discussed. Our results readily generalize to general root-finding problems.
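The scheme described in the abstract can be illustrated with a minimal sketch. The objective, the step-size schedule, and the gradient estimator below are all illustrative assumptions, not the paper's construction: a toy quadratic objective f(theta) = theta^2/2 stands in for the performance measure, and the hypothetical estimator's bias and variance both decay like the reciprocal of the per-iteration budget, while the budget of iteration n grows polynomially as c_n = n^p.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_estimate(theta, budget):
    """Hypothetical noisy gradient of f(theta) = theta**2 / 2 (true gradient: theta).

    Averaging `budget` i.i.d. noisy observations gives variance O(1/budget);
    the added 1/budget term mimics a bias that vanishes as the budget grows.
    """
    noise = rng.standard_normal(budget)
    return theta + noise.mean() + 1.0 / budget

def budget_sa(theta0, total_budget, p=1.0, a=1.0):
    """SA iteration with a polynomially increasing per-iteration budget c_n = ceil(n**p)."""
    theta, spent, n = theta0, 0, 0
    while spent < total_budget:
        n += 1
        c_n = int(np.ceil(n ** p))     # computational budget of iteration n
        g = grad_estimate(theta, c_n)  # gradient estimate using c_n replications
        theta -= (a / n) * g           # classical step size a_n = a / n
        spent += c_n
    return theta

print(budget_sa(theta0=5.0, total_budget=50_000))  # approaches the minimizer theta* = 0
```

Varying `p` trades off more iterations (small `p`) against less bias and variance per gradient estimate (large `p`); the paper's results characterize the resulting convergence rate as a function of the total effort spent.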
Pages: 217-247
Page count: 31