Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Cited by: 0
Authors
Karimi, Belhal [1 ]
Miasojedow, Blazej [2 ]
Moulines, Eric [1 ]
Wai, Hoi-To [3 ]
Affiliations
[1] Ecole Polytechn, CMAP, Palaiseau, France
[2] Univ Warsaw, Fac Math Informat & Mech, Warsaw, Poland
[3] Chinese Univ Hong Kong, Dept SEEM, Hong Kong, Peoples R China
Source
CONFERENCE ON LEARNING THEORY, VOL 99 | 2019 / Vol. 99
Keywords
biased stochastic approximation; state-dependent Markov chain; non-convex optimization; policy gradient; online expectation-maximization; GRADIENT; OPTIMIZATION; CONVERGENCE; ALGORITHMS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most prior analyses are made under restrictive assumptions such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. This work essentially relaxes all of these restrictions. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, thereby covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online expectation-maximization (EM) algorithm and the policy-gradient method for average-reward maximization in reinforcement learning.
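As a worked illustration of the scheme the abstract describes, a minimal LaTeX sketch of a biased SA recursion driven by a state-dependent Markov chain follows; the symbols (theta_n, gamma_n, H, P_theta, pi_theta, V) are standard SA notation assumed here for exposition, not quoted from the paper:

\[
\theta_{n+1} = \theta_n - \gamma_{n+1}\, H_{\theta_n}(X_{n+1}),
\qquad
X_{n+1} \sim P_{\theta_n}(X_n, \cdot),
\]
\[
h(\theta) = \int H_{\theta}(x)\, \pi_{\theta}(\mathrm{d}x),
\qquad
b(\theta) = h(\theta) - \nabla V(\theta),
\]

where \(\pi_{\theta}\) denotes the stationary distribution of the Markov kernel \(P_{\theta}\) and \(V\) a smooth Lyapunov function. Because the mean field \(h\) need not be a gradient field, the term \(b(\theta)\) need not vanish; this is the regime that the "biased" qualifier in the title refers to.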
Pages: 31