Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Cited by: 0
Authors
Li, Gen [1]
Wei, Yuting [2]
Chi, Yuejie [2]
Gu, Yuantao [1]
Chen, Yuxin
Affiliations
[1] Tsinghua University, Beijing, People's Republic of China
[2] Carnegie Mellon University, Pittsburgh, PA, USA
Source
Advances in Neural Information Processing Systems 33 (NeurIPS 2020) | 2020, Vol. 33
Keywords
Stochastic approximation; Convergence; Bounds; Rates
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_\infty$-based sample complexity of classical asynchronous Q-learning, namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function, is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{\mathsf{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $t_{\mathsf{mix}}$ and $\mu_{\min}$ denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the complexity in the case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result by a factor of at least $|\mathcal{S}||\mathcal{A}|$. Further, the scaling on the discount complexity can be improved by means of variance reduction.
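As a rough illustration of the algorithm analyzed in the abstract, the sketch below runs tabular asynchronous Q-learning with a constant learning rate along a single trajectory generated by a behavior policy. The environment interface (env.reset, env.step, env.num_states, env.num_actions), the behavior_policy callable, and the hyperparameter values are illustrative assumptions for this sketch, not part of the paper.

import numpy as np

def async_q_learning(env, behavior_policy, num_samples, gamma=0.99, eta=0.1):
    """Tabular asynchronous Q-learning along one Markovian trajectory.

    Only the (state, action) entry visited at each step is updated, which is
    what distinguishes the asynchronous setting from synchronous Q-learning,
    where every entry is refreshed using independent generative-model samples.
    Note: env and behavior_policy are assumed interfaces for illustration.
    """
    Q = np.zeros((env.num_states, env.num_actions))   # entrywise Q estimate
    state = env.reset()
    for _ in range(num_samples):
        action = behavior_policy(state)               # behavior policy draws the action
        next_state, reward = env.step(state, action)  # one Markovian transition
        td_target = reward + gamma * np.max(Q[next_state])
        # Constant-step-size update of the single visited entry.
        Q[state, action] += eta * (td_target - Q[state, action])
        state = next_state
    return Q

The constant learning rate eta corresponds to the "proper constant learning rate" assumed by the bound; the variance-reduced variant mentioned in the abstract would additionally maintain a reference Q-function and recentred temporal-difference updates, which is omitted here.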
Pages: 13