Reward-respecting subtasks for model-based reinforcement learning

Cited by: 4
Authors
Sutton, Richard S. [1,2,3,4]
Machado, Marlos C. [1,2,3]
Holland, Zacharias
Szepesvari, David
Timbers, Finbarr
Tanner, Brian
White, Adam [1,2,3]
Affiliations
[1] DeepMind, Edmonton, AB, Canada
[2] University of Alberta, Edmonton, AB, Canada
[3] Alberta Machine Intelligence Institute (Amii), Edmonton, AB, Canada
[4] Canada CIFAR AI Chair, Amii, Toronto, ON, Canada
Keywords
Planning; Model-based reinforcement learning; Temporal abstraction; Options; Feature attainment; STOMP progression; REPRESENTATIONS; ABSTRACTION
DOI
10.1016/j.artint.2023.104001
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
To achieve the ambitious goals of artificial intelligence, reinforcement learning must include planning with a model of the world that is abstract in state and time. Deep learning has made progress with state abstraction, but temporal abstraction has rarely been used, despite extensively developed theory based on the options framework. One reason for this is that the space of possible options is immense, and the methods previously proposed for option discovery do not take into account how the option models will be used in planning. Options are typically discovered by posing subsidiary tasks, such as reaching a bottleneck state or maximizing the cumulative sum of a sensory signal other than reward. Each subtask is solved to produce an option, and then a model of the option is learned and made available to the planning process. In most previous work, the subtasks ignore the reward on the original problem, whereas we propose subtasks that use the original reward plus a bonus based on a feature of the state at the time the option terminates. We show that option models obtained from such reward-respecting subtasks are much more likely to be useful in planning than eigenoptions, shortest-path options based on bottleneck states, or reward-respecting options generated by the option-critic. Reward-respecting subtasks strongly constrain the space of options and thereby also provide a partial solution to the problem of option discovery. Finally, we show how values, policies, options, and models can all be learned online and off-policy using standard algorithms and general value functions. © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
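To make the core construction concrete, the following is a minimal Python sketch, not taken from the paper: under a reward-respecting subtask, an option is trained on the original task rewards collected while it runs, plus a termination bonus proportional to a chosen feature of the stopping state. The names subtask_return, feature_fn, and bonus_weight are assumptions introduced here for illustration, not the authors' notation.

# Illustrative sketch only: the discounted return of one option execution under
# a reward-respecting subtask that rewards attainment of state feature i.
# feature_fn, bonus_weight, and the trajectory format are assumed for this example.
def subtask_return(rewards, stop_state, feature_fn, i, bonus_weight, gamma=0.99):
    g, discount = 0.0, 1.0
    for r in rewards:                 # the original task reward is kept ("respected")
        g += discount * r
        discount *= gamma
    # termination bonus based on the feature of the state where the option stops
    g += discount * bonus_weight * feature_fn(stop_state)[i]
    return g

# Example: an option that collected rewards [0, 0, 1] and stopped in a state
# whose (assumed) feature vector is [0.0, 1.0] also earns the bonus for feature i = 1.
print(subtask_return([0, 0, 1], "s_stop", lambda s: [0.0, 1.0], 1, bonus_weight=2.0))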
Pages: 17