Q-learning with Nearest Neighbors

Cited: 0
Authors
Shah, Devavrat [1 ,2 ]
Xie, Qiaomin [1 ]
Affiliations
[1] MIT, LIDS, Cambridge, MA 02139 USA
[2] MIT, Stat & Data Sci Ctr, Dept EECS, Cambridge, MA 02139 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018) | 2018 / Vol. 31
Keywords
MARKOV DECISION-PROCESSES; CONVERGENCE; APPROXIMATION; TREES;
DOI
N/A
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system under an arbitrary policy is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a d-dimensional state space and discount factor $\gamma \in (0, 1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\epsilon$-accurate estimate of the optimal Q-function using $\tilde{O}(L/(\epsilon^3(1-\gamma)^7))$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}(1/\epsilon^d)$, so the sample complexity scales as $\tilde{O}(1/\epsilon^{d+3})$. Indeed, we establish a lower bound showing that a dependence of $\tilde{\Omega}(1/\epsilon^{d+2})$ is necessary.
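To make the mechanics concrete, below is a minimal sketch of Q-learning over a continuous state space with a nearest-neighbor state mapping, learned from a single sample path under a random behavior policy. All specifics here (the fixed landmark grid, the 1-nearest-neighbor rule, the 1/n step size, and the toy one-dimensional MDP) are illustrative assumptions, not the paper's exact NNQL algorithm, whose analysis rests on nearest neighbor regression and the covering time of the trajectory.

```python
import numpy as np

# Minimal sketch of nearest-neighbor Q-learning on a continuous state space.
# Assumptions (not from the paper): a fixed grid of landmark states, a
# 1-nearest-neighbor rule, a toy 1-D drift MDP on [0, 1], and a simple
# 1/n step size. This is an illustration, not the paper's NNQL algorithm.

rng = np.random.default_rng(0)

gamma = 0.9                       # discount factor
n_landmarks = 21                  # grid resolution over the state space [0, 1]
landmarks = np.linspace(0.0, 1.0, n_landmarks)
n_actions = 2                     # actions: move left (0) or right (1)

Q = np.zeros((n_landmarks, n_actions))        # Q estimates at landmark states
counts = np.zeros((n_landmarks, n_actions))   # visit counts for step sizes

def nearest(s):
    """Index of the landmark closest to continuous state s."""
    return int(np.argmin(np.abs(landmarks - s)))

def step(s, a):
    """Toy dynamics: noisy drift left or right; reward grows with the state."""
    drift = 0.1 if a == 1 else -0.1
    s_next = np.clip(s + drift + 0.05 * rng.standard_normal(), 0.0, 1.0)
    return s_next, float(s_next)  # reward = next state

# Single sample path under a purely random (uniform) behavior policy,
# mirroring the paper's single-trajectory setting.
s = rng.uniform()
for _ in range(200_000):
    a = int(rng.integers(n_actions))
    s_next, r = step(s, a)
    i, j = nearest(s), nearest(s_next)
    counts[i, a] += 1
    alpha = 1.0 / counts[i, a]    # simple 1/n step size (an assumption)
    target = r + gamma * Q[j].max()
    Q[i, a] += alpha * (target - Q[i, a])
    s = s_next

# Greedy policy w.r.t. the learned Q should favor moving right (action 1).
print(Q.argmax(axis=1))
```

In this sketch the covering time $L$ corresponds to how long the random-walk trajectory takes to visit every landmark-action pair, which is what drives the $\tilde{O}(L/(\epsilon^3(1-\gamma)^7))$ sample complexity in the paper's analysis.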
Pages: 11
Related Papers
50 records in total
[41]   The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond [J].
Woo, Jiin ;
Joshi, Gauri ;
Chi, Yuejie .
JOURNAL OF MACHINE LEARNING RESEARCH, 2025, 26 :1-85
[42]   Q-Learning with Side Information in Multi-Agent Finite Games [J].
Sylvestre, Mathieu ;
Pavel, Lacra .
2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC), 2019 :5032-5037
[43]   Adaptive UAV-Assisted Geographic Routing With Q-Learning in VANET [J].
Jiang, Shanshan ;
Huang, Zhitong ;
Ji, Yuefeng .
IEEE COMMUNICATIONS LETTERS, 2021, 25 (04) :1358-1362
[44]   Continuous Time q-Learning for Mean-Field Control Problems [J].
Wei, Xiaoli ;
Yu, Xiang .
APPLIED MATHEMATICS AND OPTIMIZATION, 2025, 91 (01)
[45]   Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [J].
Li, Gen ;
Wei, Yuting ;
Chi, Yuejie ;
Gu, Yuantao ;
Chen, Yuxin .
IEEE TRANSACTIONS ON INFORMATION THEORY, 2022, 68 (01) :448-473
[46]   Q-learning and policy iteration algorithms for stochastic shortest path problems [J].
Yu, Huizhen ;
Bertsekas, Dimitri P. .
ANNALS OF OPERATIONS RESEARCH, 2013, 208 :95-132
[47]   Final Iteration Convergence Bound of Q-Learning: Switching System Approach [J].
Lee, Donghwan .
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (07) :4765-4772
[48]   Addressing Environment Non-Stationarity by Repeating Q-learning Updates [J].
Abdallah, Sherief ;
Kaisers, Michael .
JOURNAL OF MACHINE LEARNING RESEARCH, 2016, 17
[49]   Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis [J].
Wei, Qinglai ;
Lewis, Frank L. ;
Sun, Qiuye ;
Yan, Pengfei ;
Song, Ruizhuo .
IEEE TRANSACTIONS ON CYBERNETICS, 2017, 47 (05) :1224-1237
[50]   Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [J].
Li, Gen ;
Wei, Yuting ;
Chi, Yuejie ;
Gu, Yuantao ;
Chen, Yuxin .
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33