Model-based Reinforcement Learning: A Survey

Cited by: 229
Authors
Moerland, Thomas M. [1 ]
Broekens, Joost [1 ]
Plaat, Aske [1 ]
Jonker, Catholijn M. [1 ,2 ]
Affiliations
[1] Leiden Univ, LIACS, Leiden, Netherlands
[2] Delft Univ Technol, Interact Intelligence, Delft, Netherlands
Source
FOUNDATIONS AND TRENDS IN MACHINE LEARNING | 2023, Vol. 16, No. 1
Keywords
Exploration; Environment; Algorithms; Framework; Networks; Emotion; Systems; Robots; Agents
DOI
10.1561/2200000086
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Sequential decision making, commonly formalized as Markov Decision Process (MDP) optimization, is an important challenge in artificial intelligence. Two key approaches to this problem are reinforcement learning (RL) and planning. This survey covers the integration of both fields, better known as model-based reinforcement learning. Model-based RL has two main steps. First, we systematically cover approaches to dynamics model learning, including challenges such as dealing with stochasticity, uncertainty, partial observability, and temporal abstraction. Second, we present a systematic categorization of planning-learning integration, including aspects such as where to start planning, what budgets to allocate to planning and real data collection, how to plan, and how to integrate planning into the learning and acting loop. After these two sections, we also discuss implicit model-based RL as an end-to-end alternative to model learning and planning, and we cover the potential benefits of model-based RL. Along the way, the survey also draws connections to several related RL fields, such as hierarchical RL and transfer learning. Altogether, the survey presents a broad conceptual overview of the combination of planning and learning for MDP optimization.
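
The two-step structure described in the abstract (first learn a dynamics model from interaction data, then plan with the learned model) can be made concrete with a short sketch. The following Python example is illustrative only, not an algorithm from the survey: it fits a maximum-likelihood tabular model from transition counts and then runs value iteration on that model. The toy chain environment, the random data-collection policy, the discount factor, and all function names are assumptions made for this example.

import random
from collections import defaultdict

# Hypothetical sketch of the two steps of model-based RL:
# (1) learn a dynamics model from real transitions, (2) plan on it.
# The MDP interface, sizes, and discount factor below are illustrative.
GAMMA = 0.95
N_STATES, N_ACTIONS = 5, 2

def collect(env_step, env_reset, n_episodes=200, horizon=20):
    # Step 1a: gather real transitions with a random behavior policy.
    data = []
    for _ in range(n_episodes):
        s = env_reset()
        for _ in range(horizon):
            a = random.randrange(N_ACTIONS)
            s2, r = env_step(s, a)
            data.append((s, a, r, s2))
            s = s2
    return data

def fit_model(data):
    # Step 1b: maximum-likelihood tabular model from transition counts.
    counts = defaultdict(lambda: defaultdict(int))
    reward_sum = defaultdict(float)
    for s, a, r, s2 in data:
        counts[(s, a)][s2] += 1
        reward_sum[(s, a)] += r
    P, R = {}, {}
    for sa, successors in counts.items():
        total = sum(successors.values())
        P[sa] = {s2: n / total for s2, n in successors.items()}
        R[sa] = reward_sum[sa] / total
    return P, R

def plan(P, R, iters=100):
    # Step 2: value iteration on the learned model (not the real MDP).
    V = [0.0] * N_STATES
    for _ in range(iters):
        for s in range(N_STATES):
            V[s] = max(
                R.get((s, a), 0.0)
                + GAMMA * sum(p * V[s2] for s2, p in P.get((s, a), {}).items())
                for a in range(N_ACTIONS)
            )
    return V

# Toy chain MDP: action 1 moves right, action 0 moves left;
# reaching the rightmost state yields reward 1.
def env_reset():
    return 0

def env_step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

P, R = fit_model(collect(env_step, env_reset))
print(plan(P, R))  # learned state values under the fitted model

In the survey's terminology, fit_model corresponds to the model-learning step and plan to the planning step; practical systems replace the tabular counts with function approximators and value iteration with more scalable planners.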
Pages: 1-118 (118 pages)