Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Cited by: 0
Authors
Karimi, Belhal [1 ]
Miasojedow, Blazej [2 ]
Moulines, Eric [1 ]
Wai, Hoi-To [3 ]
Affiliations
[1] Ecole Polytechn, CMAP, Palaiseau, France
[2] Univ Warsaw, Fac Math Informat & Mech, Warsaw, Poland
[3] Chinese Univ Hong Kong, Dept SEEM, Hong Kong, Peoples R China
Source
CONFERENCE ON LEARNING THEORY, VOL 99 | 2019 / Vol. 99
Keywords
biased stochastic approximation; state-dependent Markov chain; non-convex optimization; policy gradient; online expectation-maximization; GRADIENT; OPTIMIZATION; CONVERGENCE; ALGORITHMS;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most prior analyses are made under restrictive assumptions such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, thereby covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.
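For orientation, the generic biased SA recursion described in the abstract can be sketched as follows; the notation is illustrative and assumed for exposition rather than quoted from the paper:

\theta_{n+1} = \theta_n - \gamma_{n+1}\, H_{\theta_n}(X_{n+1}), \qquad h(\theta) := \mathbb{E}_{X \sim \pi_\theta}\bigl[ H_\theta(X) \bigr],

where \{X_n\} is a Markov chain whose transition kernel may depend on the current iterate \theta_n, \gamma_{n+1} > 0 is a step size, and the mean field h(\theta) need not coincide with the gradient of the objective, so each one-step update may carry an asymptotic bias.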
Pages: 31