Behavior Priors for Efficient Reinforcement Learning

Cited by: 0
Authors
Tirumala, Dhruva [1,2]
Galashov, Alexandre [1]
Noh, Hyeonwoo [1,3]
Hasenclever, Leonard [1]
Pascanu, Razvan [1]
Schwarz, Jonathan [1,2]
Desjardins, Guillaume [1]
Czarnecki, Wojciech Marian [1]
Ahuja, Arun [1]
Teh, Yee Whye [1,2]
Heess, Nicolas [1,2]
Affiliations
[1] DeepMind, R7,14-18 Handyside St, London, England
[2] UCL, London WC1E 6BT, England
[3] OpenAI, 3180 18th St, San Francisco, CA 94110 USA
Keywords
reinforcement learning; probabilistic graphical models; control as inference; hierarchical reinforcement learning; transfer learning; entropy
DOI
N/A
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
As we deploy reinforcement learning agents to solve increasingly challenging problems, methods that allow us to inject prior knowledge about the structure of the world and effective solution strategies become increasingly important. In this work we consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors that capture the common movement and interaction patterns that are shared across a set of related tasks or contexts. For example, the day-to-day behavior of humans comprises distinctive locomotion and manipulation patterns that recur across many different situations and goals. We discuss how such behavior patterns can be captured using probabilistic trajectory models and how these can be integrated effectively into reinforcement learning schemes, e.g. to facilitate multi-task and transfer learning. We then extend these ideas to latent variable models and consider a formulation to learn hierarchical priors that capture different aspects of the behavior in reusable modules. We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives, thereby offering an alternative perspective on existing ideas. We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains, videos of which can be found at the following url: https://sites.google.com/view/behavior-priors.
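As a concrete illustration of the kind of KL-regularized objective this line of work builds on, the sketch below computes a discounted return with a per-step KL penalty that pulls the policy pi toward a behavior prior pi_0, i.e. J(pi) = E_pi[ sum_t gamma^t ( r_t - alpha * KL(pi(.|s_t) || pi_0(.|s_t)) ) ]. This is our own minimal sketch, not the paper's implementation; the function name `kl_regularized_return`, the toy Gaussian heads, and the coefficient values are hypothetical.

```python
# Minimal sketch (illustration only) of a KL-regularized return with a
# learned behavior prior pi_0:
#   J(pi) = E_pi [ sum_t gamma^t ( r_t - alpha * KL(pi(.|s_t) || pi_0(.|s_t)) ) ]
import torch
from torch.distributions import Normal, kl_divergence

def kl_regularized_return(rewards, policy_dists, prior_dists,
                          gamma=0.99, alpha=0.1):
    """Discounted return with a per-step KL penalty toward the behavior prior.

    rewards:      list of scalar reward tensors r_t
    policy_dists: list of torch Distributions pi(.|s_t)
    prior_dists:  list of torch Distributions pi_0(.|s_t)
    """
    total, discount = torch.tensor(0.0), 1.0
    for r, pi, pi0 in zip(rewards, policy_dists, prior_dists):
        kl = kl_divergence(pi, pi0).sum()  # sum KL over action dimensions
        total = total + discount * (r - alpha * kl)
        discount *= gamma
    return total

# Toy usage: a 2-step trajectory with Gaussian policy and prior heads.
rewards = [torch.tensor(1.0), torch.tensor(0.5)]
policy = [Normal(torch.zeros(3), torch.ones(3)) for _ in range(2)]
prior = [Normal(0.1 * torch.ones(3), torch.ones(3)) for _ in range(2)]
print(kl_regularized_return(rewards, policy, prior))
```

Setting alpha to zero recovers the ordinary discounted return, while larger values trade task reward for staying close to the prior's movement patterns.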
Pages: 68