Approximate Newton methods for policy search in Markov decision processes

Cited by: 0
Authors
Furmston, Thomas [1 ]
Lever, Guy [1 ]
Barber, David [1 ]
Affiliations
[1] Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
Abstract
Approximate Newton methods are standard optimization tools which aim to maintain the benefits of Newton's method, such as a fast rate of convergence, while alleviating its drawbacks, such as the computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the total expected reward, which is a standard objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs, and we use this analysis to motivate two Gauss-Newton methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods drop certain terms in the Hessian. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees, including guaranteed ascent directions, invariance to affine transformations of the parameter space, and convergence guarantees. We finally provide a unifying perspective on key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EM algorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains. © 2016 Thomas Furmston, Guy Lever, and David Barber.
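As a reading aid, the Hessian structure the abstract refers to can be sketched in standard policy-gradient notation; the symbols below (J, theta, pi, Q, H1, H2) are illustrative assumptions, not taken from this record. With J(theta) the total expected reward of a policy pi(a|s; theta) and Q(s, a) the state-action value function, the policy gradient theorem gives the gradient as an expectation of the score function weighted by Q, and differentiating once more splits the Hessian into an outer-product term and a log-policy curvature term (plus further terms involving the gradient of Q):

\nabla J(\theta) = \mathbb{E}\!\left[ \nabla \log \pi(a \mid s; \theta)\, Q(s, a) \right]

\nabla^2 J(\theta) =
  \underbrace{\mathbb{E}\!\left[ \nabla \log \pi \, \nabla \log \pi^{\top} Q(s, a) \right]}_{\mathcal{H}_1(\theta)}
  + \underbrace{\mathbb{E}\!\left[ \nabla^2 \log \pi \, Q(s, a) \right]}_{\mathcal{H}_2(\theta)}
  + \cdots

A Gauss-Newton method in this setting keeps only part of this decomposition. In particular, for a policy that is log-concave in theta (e.g. an exponential-family policy) and non-negative rewards, H2(theta) is negative semi-definite, so preconditioning the gradient with -H2(theta)^{-1} yields the guaranteed ascent directions and affine invariance mentioned in the abstract; this is a sketch under the stated assumptions, not the paper's exact statement.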
DOI
Not available
Pages: 1-51