Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Cited: 0
Authors
Karimi, Belhal [1 ]
Miasojedow, Blazej [2 ]
Moulines, Eric [1 ]
Wai, Hoi-To [3 ]
Affiliations
[1] Ecole Polytechn, CMAP, Palaiseau, France
[2] Univ Warsaw, Fac Math Informat & Mech, Warsaw, Poland
[3] Chinese Univ Hong Kong, Dept SEEM, Hong Kong, Peoples R China
Source
CONFERENCE ON LEARNING THEORY, VOL 99 | 2019 / Vol. 99
Keywords
biased stochastic approximation; state-dependent Markov chain; non-convex optimization; policy gradient; online expectation-maximization; GRADIENT; OPTIMIZATION; CONVERGENCE; ALGORITHMS;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most of the prior analyses are made under restrictive assumptions such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, thereby covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average-reward maximization in reinforcement learning.
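To make the setting in the abstract concrete, the following is a minimal Python sketch of one possible biased SA iteration, not the authors' implementation: the callables drift (playing the role of the random field H_theta(X)) and markov_step (one transition of a state-dependent Markov kernel P_theta(x, .)) are hypothetical placeholders supplied by the user, and the 1/sqrt(n) step size is only one common diminishing schedule, not necessarily the one analyzed in the paper.

import numpy as np

def biased_sa(theta0, drift, markov_step, n_iters, gamma0=0.1):
    # Generic biased stochastic approximation loop (illustrative sketch only).
    # theta0      : initial iterate
    # drift       : drift(theta, x) -> update direction H_theta(x); its mean field
    #               need not be a gradient and may carry an asymptotic bias
    # markov_step : markov_step(theta, x) -> next state of the state-dependent
    #               Markov chain (hypothetical user-supplied sampler)
    theta = np.asarray(theta0, dtype=float)
    x = None  # initial state of the Markov chain
    for n in range(1, n_iters + 1):
        x = markov_step(theta, x)                 # X_{n+1} ~ P_{theta_n}(X_n, .)
        gamma = gamma0 / np.sqrt(n)               # diminishing step size (one common choice)
        theta = theta - gamma * drift(theta, x)   # theta_{n+1} = theta_n - gamma_{n+1} H_{theta_n}(X_{n+1})
    return theta

In this template, the online EM and policy-gradient examples mentioned in the abstract correspond to particular choices of the drift term and of the state-dependent sampling mechanism.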
Pages: 31