On the sample complexity of actor-critic method for reinforcement learning with function approximation

Cited by: 0
Authors
Harshat Kumar
Alec Koppel
Alejandro Ribeiro
Affiliations
[1] Department of Electrical and Systems Engineering, The University of Pennsylvania
[2] JPMorgan AI Research
Source
Machine Learning | 2023, Vol. 112
Keywords
Actor-critic; Reinforcement learning; Markov decision process; Non-convex optimization; Stochastic programming
DOI
Not available
Abstract
Reinforcement learning, mathematically described by Markov decision problems, may be approached either through dynamic programming or policy search. Actor-critic algorithms combine the merits of both approaches by alternating between steps that estimate the value function and policy gradient updates. Because these updates exhibit correlated noise and biased gradients, only the asymptotic behavior of actor-critic has been characterized, by connecting it to an underlying dynamical system. This work puts forth a new variant of actor-critic that employs Monte Carlo rollouts during the policy search updates, which yields a controllable bias that depends on the number of critic evaluations. As a result, we are able to provide, for the first time, the convergence rate of actor-critic algorithms whose policy search step employs the policy gradient, agnostic to the choice of policy evaluation technique. In particular, we establish conditions under which the sample complexity is comparable to that of the stochastic gradient method for non-convex problems, or slower as a result of the critic estimation error, which is the main complexity bottleneck. These results hold in continuous state and action spaces with linear function approximation for the value function. We then specialize these conceptual results to the cases where the critic is estimated by temporal difference, gradient temporal difference, and accelerated gradient temporal difference learning. The resulting learning rates are corroborated on a navigation problem involving an obstacle and on the pendulum problem, which provide insight into the interplay between optimization and generalization in reinforcement learning.
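To make the abstract's structure concrete, the following is a minimal sketch of one actor-critic iteration of the kind described: a linear TD(0) critic fit from a finite sample budget, followed by a single policy-gradient step computed from a Monte Carlo rollout that uses the critic's TD error as an advantage estimate. This is an illustration of the general scheme, not the authors' exact algorithm; `env`, `policy`, and the feature map `phi` are hypothetical interfaces assumed for the sketch.

```python
import numpy as np

def td0_critic(env, policy, phi, w, gamma=0.99, alpha=0.01, n_steps=1000):
    """Semi-gradient TD(0): estimate w such that V(s) ~= w @ phi(s)."""
    s = env.reset()
    for _ in range(n_steps):
        a = policy.sample(s)
        s_next, r, done = env.step(a)
        target = r + (0.0 if done else gamma * (w @ phi(s_next)))
        w = w + alpha * (target - w @ phi(s)) * phi(s)
        s = env.reset() if done else s_next
    return w

def policy_gradient_step(env, policy, phi, w, gamma=0.99, beta=0.01, horizon=200):
    """One actor update from a single Monte Carlo rollout. The finite-sample
    critic w makes the gradient estimate biased; the bias shrinks as the
    critic's sample budget (n_steps above) grows."""
    s = env.reset()
    grad = np.zeros_like(policy.theta)
    discount = 1.0
    for _ in range(horizon):
        a = policy.sample(s)
        s_next, r, done = env.step(a)
        # TD error under the current critic serves as the advantage estimate.
        advantage = r + (0.0 if done else gamma * (w @ phi(s_next))) - w @ phi(s)
        grad += discount * advantage * policy.grad_log_prob(s, a)
        discount *= gamma
        if done:
            break
        s = s_next
    policy.theta = policy.theta + beta * grad  # gradient ascent on the return
    return policy
```

Alternating these two routines, with the critic's sample budget chosen to control the bias of the actor's gradient estimate, mirrors the structure whose sample complexity the paper analyzes; swapping the TD(0) routine for gradient or accelerated gradient temporal difference changes only the critic step.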
Pages: 2433-2467
Page count: 34