Convergence of a Q-learning Variant for Continuous States and Actions

Cited by: 5
Authors
Carden, Stephen [1 ]
Affiliations
[1] Clemson Univ, Dept Math Sci, Clemson, SC 29631 USA
Keywords
DOI
10.1613/jair.4271
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a reinforcement learning algorithm for solving infinite horizon Markov Decision Processes under the expected total discounted reward criterion when both the state and action spaces are continuous. This algorithm is based on Watkins' Q-learning, but uses Nadaraya-Watson kernel smoothing to generalize knowledge to unvisited states. As expected, continuity conditions must be imposed on the mean rewards and transition probabilities. Using results from kernel regression theory, this algorithm is proven capable of producing a Q-value function estimate that is uniformly within an arbitrary tolerance of the true Q-value function with probability one. The algorithm is then applied to an example problem to empirically show convergence as well.
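The core idea of generalizing Q-values to unvisited state-action pairs via Nadaraya-Watson kernel smoothing can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the Gaussian kernel, the bandwidth value, and the function name `nw_q_estimate` are assumptions chosen for clarity.

```python
import numpy as np

def nw_q_estimate(s, a, samples, bandwidth=0.5):
    """Nadaraya-Watson estimate of Q(s, a) from observed samples.

    samples: list of (state, action, q_target) triples gathered
    during learning; a Gaussian kernel over the joint (state,
    action) space is assumed here for illustration.
    """
    points = np.array([[si, ai] for si, ai, _ in samples])
    targets = np.array([t for _, _, t in samples])
    # Squared distance from the query point to each sample.
    d2 = np.sum((points - np.array([s, a])) ** 2, axis=1)
    # Gaussian kernel weights; nearby samples dominate the estimate.
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Weighted average of the sampled Q-targets.
    return np.dot(w, targets) / np.sum(w)
```

Because the estimate is a convex combination of observed targets, it interpolates smoothly between visited state-action pairs, which is what lets continuity conditions on rewards and transitions translate into uniform convergence of the Q-value estimate.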
Pages: 705 - 731
Page count: 27
Related Papers
36 references in total
  • [1] Albus J. S., 1975, Transactions of the ASME. Series G, Journal of Dynamic Systems, Measurement and Control, V97, P220, DOI 10.1115/1.3426922
  • [2] Baird L. C., 1993, WLTR931147 WRIGHT PA
  • [3] Bertsekas D. P., 1995, A counterexample to temporal differences learning, NEURAL COMPUTATION, V7(02), P270-279
  • [4] Billingsley P., 1999, Convergence of Probability Measures, V2nd ed., DOI DOI 10.1002/9780470316962
  • [5] Ernst D, 2005, J MACH LEARN RES, V6, P503
  • [6] Fairbank M., 2012, P IEEE INT JOINT C N
  • [7] Gaskett C, 1999, LECT NOTES ARTIF INT, V1747, P417
  • [8] Gordon G., 1996, CHATTERING SARSA LAM
  • [9] Gyorfi L., 1985, WILEY SERIES PROBABI
  • [10] Hansen B. E., 2008, ECONOMETRIC THEORY