Asymptotic behavior of a Markovian stochastic algorithm with constant step

Cited by: 16
Authors
Fort, JC
Pagès, G
Affiliations
[1] Univ Nancy 1, Inst E Cartan, F-54506 Vandoeuvre Nancy, France
[2] Univ Paris 01, SAMOS, F-75634 Paris 13, France
[3] Univ Paris 06, Probabil Lab, URA 224, F-75252 Paris 05, France
[4] Univ Paris 12, UFR Sci, F-94010 Creteil, France
Keywords
stochastic algorithm; stationary distribution; Markov chain; gradient descent; saddle point;
DOI
10.1137/S0363012997328610
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
We first derive from abstract results on Feller transition kernels that, under some mild assumptions, a Markov stochastic algorithm with constant step size ε usually has a tight family of invariant distributions ν_ε, ε ∈ (0, ε₀], whose weak limiting distributions as ε ↓ 0 are all flow-invariant for its ODE. The main part of the paper then deals with a kind of converse: which of the flow-invariant distributions of the ODE can arise as such limits? We first show that no repulsive invariant (thin) set can belong to their supports. When the ODE is a stochastic pseudogradient descent, these supports cannot contain saddle or spurious equilibrium points either, so the limits are eventually supported by the set of local minima of the potential. Such results require only that the random perturbation lie in L². Various examples are treated, showing that these results yield weak convergence results for the ν_ε's, sometimes toward a saddle point when the algorithm is not a pseudogradient.
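The phenomenon described in the abstract can be illustrated numerically. The sketch below (not taken from the paper; the potential, step size, and all function names are illustrative choices) runs a constant-step stochastic gradient iteration X_{n+1} = X_n − ε(∇V(X_n) + ξ_{n+1}) on a double-well potential with local minima at ±1 and a saddle (unstable equilibrium) at 0, and checks that long-run samples, which approximate draws from the invariant distribution ν_ε, concentrate near the local minima rather than the saddle:

```python
import random

def grad_V(x):
    # Gradient of the double-well potential V(x) = (x^2 - 1)^2 / 4,
    # with local minima at x = -1, x = +1 and an unstable equilibrium at x = 0.
    return x * (x * x - 1.0)

def simulate(eps, n_steps, x0=0.0, seed=0):
    """Constant-step stochastic algorithm
    X_{n+1} = X_n - eps * (grad_V(X_n) + noise),
    with i.i.d. standard Gaussian (hence L^2) perturbations."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        x = x - eps * (grad_V(x) + rng.gauss(0.0, 1.0))
    return x

# After a long run, the law of X_n approximates the invariant
# distribution nu_eps; for small eps it concentrates on {-1, +1},
# even though the chain starts exactly at the saddle point 0.
samples = [simulate(eps=0.01, n_steps=20_000, seed=s) for s in range(200)]
near_minima = sum(1 for x in samples if min(abs(x - 1), abs(x + 1)) < 0.3)
print(f"{near_minima}/200 long-run samples lie within 0.3 of a local minimum")
```

This is only a one-dimensional pseudogradient case; the paper's point is that for non-pseudogradient algorithms the limiting distribution can instead charge a saddle point, which a sketch like this would not exhibit.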
Pages: 1456-1482
Number of pages: 27