Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Cited by: 0
Authors
Karimi, Belhal [1 ]
Miasojedow, Blazej [2 ]
Moulines, Eric [1 ]
Wai, Hoi-To [3 ]
Affiliations
[1] Ecole Polytechn, CMAP, Palaiseau, France
[2] Univ Warsaw, Fac Math Informat & Mech, Warsaw, Poland
[3] Chinese Univ Hong Kong, Dept SEEM, Hong Kong, Peoples R China
Source
CONFERENCE ON LEARNING THEORY, VOL 99 | 2019 / Vol. 99
Keywords
biased stochastic approximation; state-dependent Markov chain; non-convex optimization; policy gradient; online expectation-maximization; GRADIENT; OPTIMIZATION; CONVERGENCE; ALGORITHMS;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most prior analyses are made under restrictive assumptions, such as unbiased gradient estimates and a convex objective function, which significantly limit their applications to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider update procedures whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.
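As a rough illustration of the scheme described in the abstract, the iteration theta_{k+1} = theta_k - gamma_{k+1} * H(theta_k, X_{k+1}) can be sketched in a few lines, where X_{k+1} is drawn from a state-dependent Markov chain and H is a biased drift term. All numerical choices below (the objective, the chain dynamics, the step-size schedule) are hypothetical and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_step(x, theta):
    # State-dependent AR(1) chain: the noise dynamics depend on the current
    # iterate theta, so the chain's stationary distribution drifts with theta
    # (hypothetical example of a "state-dependent Markov chain").
    return 0.5 * x + 0.1 * np.tanh(theta) + rng.normal(scale=0.5)

def drift(theta, x):
    # Drift term H(theta, x): a noisy estimate of the gradient of the toy
    # objective V(theta) = 0.5 * theta**2, plus the chain state x, whose
    # conditional mean is nonzero -- i.e. the one-step update is biased.
    return theta + x

theta, x = 5.0, 0.0
for k in range(1, 20001):
    gamma = 1.0 / (k ** 0.6)      # diminishing step size gamma_k -> 0
    x = markov_step(x, theta)     # advance the state-dependent chain
    theta -= gamma * drift(theta, x)

print(theta)  # iterate settles near the (biased) stationary point
```

Despite the Markovian, biased drift, the diminishing step size lets the iterate settle near the root of the mean field, which is the kind of behavior the paper's non-asymptotic bounds quantify.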
Pages: 31
Related Papers
(50 records total)
[31] Xiong, Huaqing; Xu, Tengyu; Liang, Yingbin; Zhang, Wei. Non-asymptotic Convergence of Adam-type Reinforcement Learning Algorithms under Markovian Sampling [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35: 10460-10468.
[32] Chen, HF; Uosaki, K. Convergence analysis of dynamic stochastic approximation [J]. SYSTEMS & CONTROL LETTERS, 1998, 35(05): 309-315.
[33] Tang, Huaibin; Li, Tao. On asymptotic properties of continuous-time stochastic approximation type consensus protocols [J]. 2014 IEEE 53RD ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2014: 2210-2215.
[34] Fukasawa, Masaaki. Asymptotic analysis for stochastic volatility: Edgeworth expansion [J]. ELECTRONIC JOURNAL OF PROBABILITY, 2011, 16: 764-791.
[35] Luna, Juan Pablo; Sagastizabal, Claudia; Solodov, Mikhail. An approximation scheme for a class of risk-averse stochastic equilibrium problems [J]. MATHEMATICAL PROGRAMMING, 2016, 157(02): 451-481.
[36] Filbet, Francis; Rambaud, Amelie. Analysis of an asymptotic preserving scheme for relaxation systems [J]. ESAIM: MATHEMATICAL MODELLING AND NUMERICAL ANALYSIS, 2013, 47(02): 609-633.
[37] Liu, An; Lau, Vincent K. N.; Kananian, Borna. Stochastic Successive Convex Approximation for Non-Convex Constrained Stochastic Optimization [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2019, 67(16): 4189-4203.
[38] Xu, Juanjuan; Zhang, Huanshui; Xie, Lihua. Stochastic Approximation Approach for Consensus and Convergence Rate Analysis of Multiagent Systems [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2012, 57(12): 3163-3168.
[39] Kunstner, Frederik; Kumar, Raunak; Schmidt, Mark. Homeomorphic-Invariance of EM: Non-Asymptotic Convergence in KL Divergence for Exponential Families via Mirror Descent [J]. 24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130.
[40] Wang, Gang. Finite-Time Error Bounds of Biased Stochastic Approximation With Application to TD-Learning [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2022, 70: 950-962.