Stochastic Policy Gradient Ascent in Reproducing Kernel Hilbert Spaces

Cited by: 12
Authors
Paternain, Santiago [1 ]
Bazerque, Juan Andres [2 ]
Small, Austin [1 ]
Ribeiro, Alejandro [1 ]
Affiliations
[1] Univ Penn, Dept Elect & Syst Engn, Philadelphia, PA 19104 USA
[2] Univ Republica, Fac Ingn, Dept Ingn Elect, Montevideo 11800, Uruguay
Keywords
Stochastic processes; Kernel; Convergence; Complexity theory; Trajectory; Hilbert space; Standards; Autonomous systems; gradient methods; Markov processes; unsupervised learning
DOI
10.1109/TAC.2020.3029317
CLC Classification Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Reinforcement learning consists of finding policies that maximize an expected cumulative long-term reward in a Markov decision process with unknown transition probabilities and instantaneous rewards. In this article, we consider the problem of finding such optimal policies while assuming they are continuous functions belonging to a reproducing kernel Hilbert space (RKHS). To learn the optimal policy, we introduce a stochastic policy gradient ascent algorithm with three novel features. First, the stochastic estimates of policy gradients are unbiased. Second, the variance of stochastic gradients is reduced by drawing on ideas from numerical differentiation. Third, policy complexity is controlled using sparse RKHS representations. The first feature is instrumental in proving convergence to a stationary point of the expected cumulative reward. The second facilitates reasonable convergence times. The third is a necessity in practical implementations, which we show can be achieved in a way that does not eliminate convergence guarantees. Numerical examples in standard problems illustrate successful learning of policies with low-complexity representations that are close to stationary points of the expected cumulative reward.
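The abstract describes the algorithm only at a high level. Below is a minimal illustrative sketch, in Python, of stochastic policy gradient ascent with a policy whose mean is a kernel expansion in an RKHS. It is not the authors' implementation: the toy one-dimensional problem, the Gaussian policy, the truncated Monte Carlo return used as a Q estimate, and the magnitude-based pruning used in place of the article's sparse RKHS projection are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional problem (assumed for illustration): the state drifts by the
# action plus noise; the reward penalizes distance from the origin and control effort.
GAMMA, SIGMA, HORIZON, STEP, BANDWIDTH, MAX_CENTERS = 0.9, 0.5, 30, 0.05, 1.0, 50

def reward(s, a):
    return -(s ** 2) - 0.1 * (a ** 2)

def transition(s, a):
    return s + a + 0.05 * rng.standard_normal()

def kernel(s, centers):
    # Gaussian kernel evaluations k(s, centers)
    return np.exp(-((s - centers) ** 2) / (2.0 * BANDWIDTH ** 2))

# RKHS policy mean h(s) = sum_i weights[i] * k(s, centers[i]); actions are drawn
# from a Gaussian centered at h(s) with standard deviation SIGMA.
centers = np.array([0.0])
weights = np.array([0.0])

def policy_mean(s):
    return float(kernel(s, centers) @ weights)

for episode in range(2000):
    # Roll out one trajectory under the current stochastic policy.
    s = rng.uniform(-2.0, 2.0)
    states, actions, rewards = [], [], []
    for _ in range(HORIZON):
        a = policy_mean(s) + SIGMA * rng.standard_normal()
        states.append(s)
        actions.append(a)
        rewards.append(reward(s, a))
        s = transition(s, a)

    # Truncated Monte Carlo return as a Q estimate at the first state-action pair.
    # (The article constructs unbiased estimates differently; this is a simplification.)
    q_hat = sum(GAMMA ** t * r for t, r in enumerate(rewards))
    s0, a0 = states[0], actions[0]

    # Functional gradient of log pi(a0 | s0) with respect to h is
    # k(s0, .) * (a0 - h(s0)) / SIGMA**2; ascend along q_hat times that direction
    # by appending a new kernel center at the visited state.
    score = (a0 - policy_mean(s0)) / SIGMA ** 2
    centers = np.append(centers, s0)
    weights = np.append(weights, STEP * q_hat * score)

    # Crude sparsification: keep only the largest-magnitude coefficients. This stands
    # in for the sparse RKHS projection used in the article to control complexity.
    if centers.size > MAX_CENTERS:
        keep = np.argsort(np.abs(weights))[-MAX_CENTERS:]
        centers, weights = centers[keep], weights[keep]

print("learned policy mean at s = 1.0:", policy_mean(1.0))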
Pages: 3429-3444
Number of pages: 16