Skill-based curiosity for intrinsically motivated reinforcement learning

Cited by: 2
Authors
Nicolas Bougie
Ryutaro Ichise
Affiliations
[1] The Graduate University for Advanced Studies
[2] National Institute of Informatics
Source
Machine Learning | 2020 / Vol. 109
Keywords
Reinforcement learning; Exploration; Autonomous exploration; Curiosity in reinforcement learning
DOI
Not available
Abstract
Reinforcement learning methods rely on rewards provided by the environment that are extrinsic to the agent. However, many real-world scenarios involve sparse or delayed rewards. In such cases, the agent can develop its own intrinsic reward function, called curiosity, that enables it to explore its environment in the quest for new skills. We propose a novel end-to-end curiosity mechanism for deep reinforcement learning methods that allows an agent to gradually acquire new skills. Our method scales to high-dimensional problems, avoids the need to directly predict the future, and can perform in sequential decision scenarios. We formulate curiosity as the ability of the agent to predict its own knowledge about the task. We base the prediction on the idea of skill learning to incentivize the discovery of new skills and guide exploration towards promising solutions. To further improve the data efficiency and generalization of the agent, we propose to learn a latent representation of the skills. We present a variety of sparse reward tasks in MiniGrid, MuJoCo, and Atari games. We compare the performance of an augmented agent that uses our curiosity reward to state-of-the-art learners. Experimental evaluation exhibits higher performance compared to reinforcement learning models that only learn by maximizing extrinsic rewards.
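The abstract's core idea (an intrinsic reward given by the agent's error in predicting its own knowledge of a skill, added to the sparse extrinsic reward) can be illustrated with a minimal sketch. This is not the paper's actual architecture: the linear predictor, the skill embeddings, and the names `curiosity_bonus` and `total_reward` below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned predictor of the agent's skill embedding.
W = rng.normal(scale=0.1, size=(4, 4))

def curiosity_bonus(skill_embedding, next_skill_embedding, lr=0.01):
    """Intrinsic reward = the agent's error in predicting its own knowledge.

    The bonus is large when the predictor fails (novel skill territory)
    and shrinks as the predictor improves (skill mastered), so exploration
    is steered toward not-yet-acquired skills.
    """
    global W
    pred = W @ skill_embedding
    error = next_skill_embedding - pred
    bonus = float(np.sum(error ** 2))
    # Online update: revisiting a mastered skill pays a smaller bonus.
    W += lr * np.outer(error, skill_embedding)
    return bonus

def total_reward(r_ext, z, z_next, beta=0.5):
    """Augment a (possibly sparse) extrinsic reward with the curiosity bonus."""
    return r_ext + beta * curiosity_bonus(z, z_next)

# The same transition, seen twice: the bonus decays as the skill is learned.
z = rng.normal(size=4)
z_next = rng.normal(size=4)
b1 = curiosity_bonus(z, z_next)
b2 = curiosity_bonus(z, z_next)
```

As a design sketch, this captures the decaying-novelty property the abstract describes: the intrinsic signal vanishes for skills the agent can already predict, leaving only the extrinsic reward.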
Pages: 493–512 (19 pages)