Cover Tree Bayesian Reinforcement Learning

Cited: 0
Authors
Tziortziotis, Nikolaos [1 ]
Dimitrakakis, Christos [2 ]
Blekas, Konstantinos [1 ]
Affiliations
[1] Univ Ioannina, Dept Comp Sci & Engn, GR-45110 Ioannina, Greece
[2] Chalmers Univ Technol, Dept Comp Sci & Engn, SE-41296 Gothenburg, Sweden
Keywords
Bayesian inference; non-parametric statistics; reinforcement learning; Polya tree; prediction
DOI
not available
Chinese Library Classification
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high-dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration policies in unknown environments. The flexibility and computational simplicity of the model render it suitable for many reinforcement learning problems in continuous state spaces. We demonstrate this in an experimental comparison with a Gaussian process model, a linear model and simple least squares policy iteration.
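The exploration strategy named in the abstract, Thompson sampling, acts by drawing one model from the current posterior and behaving greedily with respect to that draw. A minimal sketch of the idea is shown below on a simple Bernoulli bandit with Beta posteriors; this is an illustrative toy, not the paper's full continuous-state RL setting, and all function names here are hypothetical.

```python
import random

def thompson_step(successes, failures):
    """Draw a success probability for each arm from its Beta posterior
    (with a uniform Beta(1, 1) prior) and pick the arm whose draw is largest."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def run(true_probs, steps=2000, seed=0):
    """Simulate Thompson sampling on a Bernoulli bandit and
    return per-arm success/failure counts."""
    random.seed(seed)
    k = len(true_probs)
    succ, fail = [0] * k, [0] * k
    for _ in range(steps):
        arm = thompson_step(succ, fail)
        if random.random() < true_probs[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ, fail

succ, fail = run([0.2, 0.5, 0.8])
pulls = [s + f for s, f in zip(succ, fail)]  # the best arm dominates over time
```

Because the sampled model is random, arms with uncertain posteriors are still tried occasionally, which gives the exploration behaviour the paper exploits in its RL setting.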
Pages: 2313-2335
Page count: 23