Integrating Sample-Based Planning and Model-Based Reinforcement Learning

Cited by: 0
Authors
Walsh, Thomas J. [1 ]
Goschin, Sergiu [1 ]
Littman, Michael L. [1 ]
Affiliations
[1] Rutgers State Univ, Dept Comp Sci, Piscataway, NJ 08854 USA
Source
PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10) | 2010
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Recent advancements in model-based reinforcement learning have shown that the dynamics of many structured domains (e.g., DBNs) can be learned with tractable sample complexity, despite their exponentially large state spaces. Unfortunately, these algorithms all require access to a planner that computes a near-optimal policy, and while many traditional MDP algorithms make this guarantee, their computation time grows with the number of states. We show how to replace these over-matched planners with a class of sample-based planners, whose computation time is independent of the number of states, without sacrificing the sample-efficiency guarantees of the overall learning algorithms. To do so, we define sufficient criteria for a sample-based planner to be used in such a learning system and analyze two popular sample-based approaches from the literature. We also introduce our own sample-based planner, which combines the strategies from these algorithms and still meets the criteria for integration into our learning system. In doing so, we define the first complete RL solution for compactly represented (exponentially sized) state spaces with efficiently learnable dynamics that is both sample efficient and whose computation time does not grow rapidly with the number of states.
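To illustrate the class of planners the abstract refers to, below is a minimal sketch of sparse sampling (Kearns, Mansour & Ng 2002), one of the classic sample-based planners whose cost depends only on the sampling width and horizon, not on the size of the state space. All names (`sparse_sample_q`, `sim`, `width`, `depth`) are illustrative, and the generative-model interface is an assumption, not the paper's actual API.

```python
def sparse_sample_q(sim, state, actions, depth, width, gamma):
    """Estimate Q-values at `state` by sparse sampling: for each action,
    draw `width` next-state samples from a generative model `sim` and
    recurse to horizon `depth`. Runtime is O((|actions| * width) ** depth),
    independent of the total number of states."""
    if depth == 0:
        return {a: 0.0 for a in actions}
    q = {}
    for a in actions:
        total = 0.0
        for _ in range(width):
            # `sim` is assumed to return (next_state, reward) for (state, action)
            next_state, reward = sim(state, a)
            q_next = sparse_sample_q(sim, next_state, actions,
                                     depth - 1, width, gamma)
            total += reward + gamma * max(q_next.values())
        q[a] = total / width
    return q
```

In a learning loop of the kind the paper analyzes, `sim` would be the learned factored model; the planner touches only the states it samples, which is what makes it compatible with exponentially large (e.g., DBN-described) state spaces.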
Pages: 612-617
Number of pages: 6