Final Iteration Convergence Bound of Q-Learning: Switching System Approach

Cited by: 4
Author
Lee, Donghwan [1]
Affiliation
[1] Korea Adv Inst Sci & Technol, Dept Elect Engn, Daejeon 34141, South Korea
Keywords
Q-learning; Convergence; Switching systems; Switches; Symmetric matrices; Markov processes; Behavioral sciences; finite-time analysis; reinforcement learning (RL); switching system; STOCHASTIC-APPROXIMATION; RATES
DOI
10.1109/TAC.2024.3355326
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Q-learning is one of the fundamental reinforcement learning (RL) algorithms, and its convergence has been the focus of extensive research over the past several decades. Recently, a new finite-time error bound and analysis for Q-learning was introduced using a switching system framework, which views the dynamics of Q-learning as a discrete-time stochastic switching system. That prior study established a finite-time error bound on the averaged iterates using Lyapunov functions, offering further insight into Q-learning. While valuable, the analysis focuses on error bounds for the averaged iterate, which has inherent disadvantages: it requires extra averaging steps, which can slow the convergence rate. Moreover, the final iterate, being the original form of Q-learning, is more commonly used and is often regarded as the more intuitive and natural form in the majority of iterative algorithms. In this article, we present a finite-time error bound on the final iterate of Q-learning based on the switching system framework. The proposed error bounds have different features than previous bounds and cover different scenarios. Finally, we expect the proposed results to provide additional insight into Q-learning via its connections with discrete-time switching systems, and to offer a new template for the finite-time analysis of more general RL algorithms.
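To make the final-iterate setting concrete, below is a minimal Python sketch (not the paper's code) of tabular Q-learning on a randomly generated MDP. All names and hyperparameters (n_states, n_actions, gamma, alpha, and the random MDP itself) are illustrative assumptions; the max over actions in the update is what plays the role of the switching signal in the switching-system view, and the printed Q is the final iterate itself, with no averaging.

import numpy as np

# Minimal sketch of tabular Q-learning in its final-iterate form (illustration
# only; the random MDP and all hyperparameters below are hypothetical, not
# taken from the paper).
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3        # hypothetical MDP size
gamma, alpha = 0.9, 0.1           # discount factor, constant step size

# Random transition kernel P[s, a] = distribution over next states, rewards R.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
s = 0
for k in range(50_000):
    a = int(rng.integers(n_actions))              # uniform exploratory behavior policy
    s_next = int(rng.choice(n_states, p=P[s, a]))
    # Standard Q-learning update; the max over actions (the greedy policy) acts
    # as the switching signal in the switching-system view of the dynamics.
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(np.round(Q, 3))             # the final iterate Q_k, no averaging step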
Pages: 4765-4772
Number of pages: 8