Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension

Cited by: 0
Authors
Wang, Ruosong [1 ]
Salakhutdinov, Ruslan [1 ]
Yang, Lin F. [2 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Univ Calif Los Angeles, Los Angeles, CA USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020 | 2020 / Vol. 33
Keywords
PAC BOUNDS;
DOI
none listed
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Value function approximation has demonstrated phenomenal empirical success in reinforcement learning (RL). Nevertheless, despite recent progress in developing theory for RL with linear function approximation, an understanding of general function approximation schemes is largely missing. In this paper, we establish the first provably efficient RL algorithm with general value function approximation. We show that if the value functions admit an approximation with a function class F, our algorithm achieves a regret bound of Õ(poly(dH)√T), where d is a complexity measure of F that depends on the eluder dimension [Russo and Van Roy, 2013] and log-covering numbers, H is the planning horizon, and T is the number of interactions with the environment. Our theory generalizes the linear MDP assumption to general function classes. Moreover, our algorithm is model-free and provides a framework to justify the effectiveness of algorithms used in practice.
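The regret bound stated in the abstract can be written out explicitly. The following is a sketch using standard episodic-RL notation (K episodes of horizon H, policy π_k played in episode k, initial state s₁ᵏ); these symbols are conventional assumptions, not taken verbatim from the paper:

```latex
% Regret over K episodes, with T = KH total interactions with the environment.
% V^* is the optimal value function; V^{\pi_k} is the value of the policy
% played in episode k, both evaluated at that episode's initial state.
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Big( V^{*}\!\big(s_1^{k}\big) - V^{\pi_k}\!\big(s_1^{k}\big) \Big)
\;\le\; \widetilde{O}\!\big(\mathrm{poly}(dH)\,\sqrt{T}\big)
```

Here Õ(·) suppresses logarithmic factors, and d is the complexity measure of the function class F described in the abstract (depending on the eluder dimension and log-covering numbers).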
Pages: 13
Related Papers
69 in total
  • [1] Abbasi-Yadkori Y., 2011, Advances in Neural Information Processing Systems, p. 2312
  • [2] Agrawal S., 2017, Advances in Neural Information Processing Systems, p. 1184
  • [3] Akkaya I., Andrychowicz M., Chociej M., Litwin M., McGrew B., Petron A., Paino A., Plappert M., Powell G., Ribas R., Schneider J., Tezak N., Tworek J., Welinder P., Weng L., Yuan Q., Zaremba W., Zhang L., 2019, arXiv
  • [4] [Anonymous], 2017, arXiv:1703.05449
  • [5] [Anonymous], 2019, Advances in Neural Information Processing Systems
  • [6] Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path
    Antos, Andras
    Szepesvari, Csaba
    Munos, Remi
    [J]. MACHINE LEARNING, 2008, 71 (01) : 89 - 129
  • [7] Ayoub A., 2020, Proceedings of Machine Learning Research, Vol. 119
  • [8] Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model
    Azar, Mohammad Gheshlaghi
    Munos, Remi
    Kappen, Hilbert J.
    [J]. MACHINE LEARNING, 2013, 91 (03) : 325 - 349
  • [9] Bertsekas D. P., 1996, Neuro-Dynamic Programming
  • [10] Cai Q., 2019, arXiv:1912.05830