Budget-dependent convergence rate of stochastic approximation

Times Cited: 40
Authors
L'Ecuyer, P
Yin, G
Affiliations
[1] Univ Montreal, Dept IRO, Montreal, PQ H3C 3J7, Canada
[2] Wayne State Univ, Dept Math, Detroit, MI 48202 USA
Keywords
stochastic optimization; discrete-event systems; stochastic approximation; gradient estimate; rate of convergence; limit theorems
DOI
10.1137/S1052623495270723
Chinese Library Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
Convergence rate results are derived for a stochastic optimization problem in which a performance measure is minimized with respect to a vector parameter theta. Assuming that a gradient estimator is available and that both the bias and the variance of the estimator are (known) functions of the budget devoted to its computation, the gradient estimator is employed in conjunction with a stochastic approximation (SA) algorithm. Our interest is in how to allocate the total available computational budget across the successive SA iterations. The effort is devoted to solving the asymptotic version of this problem by finding the convergence rate of SA toward the optimizer, first as a function of the number of iterations and then as a function of the total computational effort. As a result, the optimal rate of increase of the computational budget per iteration can be found. Explicit expressions are derived for the case where the computational budget devoted to an iteration is polynomial in the iteration number and where the bias and variance of the gradient estimator are polynomial in that budget. Applications include the optimization of steady-state simulation models with likelihood ratio, perturbation analysis, or finite-difference gradient estimators; optimization of infinite-horizon models with discounting; optimization of functions of several expectations; and so on. Several examples are discussed. Our results readily generalize to general root-finding problems.
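As a small illustration of the setup described in the abstract (a hypothetical sketch, not the paper's algorithm: the quadratic test function f(theta) = theta^2/2 and the exponents chosen for bias decay, noise decay, and budget growth are all assumptions made here), the following Python snippet runs an SA recursion theta_{n+1} = theta_n - a_n * g_n in which the gradient estimate g_n has a bias and a standard deviation that decay polynomially in the per-iteration budget c_n, while c_n itself grows polynomially in the iteration number n:

```python
import numpy as np

# Toy sketch of budget-dependent SA (illustrative constants and exponents,
# not taken from the paper). True gradient of f(theta) = 0.5*theta^2 is theta.
rng = np.random.default_rng(0)

def noisy_grad(theta, budget, beta=1.0, delta=1.0):
    """Gradient estimate with bias O(budget**-beta) and
    standard deviation O(budget**-(delta/2))."""
    bias = budget ** (-beta)
    std = budget ** (-delta / 2.0)
    return theta + bias + std * rng.standard_normal()

theta, total_cost = 2.0, 0.0
for n in range(1, 10_001):
    c_n = n ** 0.5          # budget per iteration grows polynomially in n
    a_n = 1.0 / n           # classical SA step size
    theta -= a_n * noisy_grad(theta, c_n)
    total_cost += c_n       # cumulative computational effort

print(f"theta ~ {theta:.4f} after total budget {total_cost:.0f}")
```

Varying the budget-growth exponent against the bias and noise exponents trades off iterations against estimator quality for a fixed total budget, which is the trade-off the paper's rate results quantify asymptotically.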
Pages: 217-247
Number of pages: 31