A tutorial on linear function approximators for dynamic programming and reinforcement learning

Cited by: 68
Authors
Geramifard, Alborz [1]
Walsh, Thomas J. [1]
Tellex, Stefanie [2]
Chowdhary, Girish [1]
Roy, Nicholas [2]
How, Jonathan P. [1]
Affiliations
[1] MIT LIDS, United States
[2] MIT CSAIL, United States
Source
Foundations and Trends in Machine Learning | 2013 / Volume 6 / Issue 4
DOI
10.1561/2200000042
Abstract
A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This article reviews such algorithms, beginning with well-known dynamic programming methods for solving MDPs such as policy iteration and value iteration, then describing approximate dynamic programming methods such as trajectory-based value iteration, and finally moving to reinforcement learning methods such as Q-Learning, SARSA, and least-squares policy iteration. We describe algorithms in a unified framework, giving pseudocode together with memory and iteration complexity analysis for each. Empirical evaluations of these techniques with four representations across four domains provide insight into how these algorithms perform with various feature sets in terms of running time and performance. © 2013 A. Geramifard, T. J. Walsh, S. Tellex, G. Chowdhary.
Pages: 375-454
Number of pages: 79
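
Illustrative only: the following short sketch (not the authors' code) shows one of the algorithm families the abstract names, on-policy SARSA with a linear function approximator. The five-state chain MDP, the one-hot feature map, and all hyperparameter values are assumptions made for this example, not taken from the paper.

# Minimal sketch of SARSA with a linear function approximator.
# The toy chain MDP, one-hot features, and hyperparameters below are
# illustrative assumptions only, not taken from the surveyed paper.
import numpy as np

n_states, n_actions = 5, 2            # small chain; actions: 0 = left, 1 = right
gamma, alpha, epsilon = 0.95, 0.1, 0.1

def features(s, a):
    # One-hot (tabular) features phi(s, a); any fixed linear feature map could be used.
    phi = np.zeros(n_states * n_actions)
    phi[s * n_actions + a] = 1.0
    return phi

def step(s, a):
    # Toy dynamics: reward 1 for reaching the right end of the chain (terminal).
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

def q(theta, s, a):
    # Linear action-value estimate Q(s, a) = theta . phi(s, a).
    return theta @ features(s, a)

def epsilon_greedy(theta, s, rng):
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    qs = np.array([q(theta, s, a) for a in range(n_actions)])
    best = np.flatnonzero(qs == qs.max())       # break ties randomly
    return int(rng.choice(best))

rng = np.random.default_rng(0)
theta = np.zeros(n_states * n_actions)          # weights of the linear approximator

for episode in range(200):
    s = 0
    a = epsilon_greedy(theta, s, rng)
    done = False
    while not done:
        s_next, r, done = step(s, a)
        a_next = epsilon_greedy(theta, s_next, rng)
        # On-policy TD target uses the action actually selected in s_next.
        target = r if done else r + gamma * q(theta, s_next, a_next)
        delta = target - q(theta, s, a)
        theta += alpha * delta * features(s, a)  # stochastic gradient-style update
        s, a = s_next, a_next

print("Learned Q(s, right) along the chain:",
      [round(q(theta, s, 1), 2) for s in range(n_states)])

Replacing the one-hot map in features() with any other fixed feature vector leaves the update rule unchanged, which is the sense in which the methods reviewed in the article share a common linear architecture.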