Fast and slow curiosity for high-level exploration in reinforcement learning

Cited by: 17
Authors
Bougie, Nicolas [1,2]
Ichise, Ryutaro [1,2]
Affiliations
[1] Natl Inst Informat, Tokyo, Japan
[2] Grad Univ Adv Studies, Sokendai, Tokyo, Japan
Keywords
Reinforcement learning; Exploration; Autonomous exploration; Curiosity in reinforcement learning; Networks
DOI
10.1007/s10489-020-01849-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep reinforcement learning (DRL) algorithms rely on carefully designed environment rewards that are extrinsic to the agent. However, in many real-world scenarios, rewards are sparse or delayed, motivating the need for efficient exploration strategies. While intrinsically motivated agents hold the promise of better local exploration, solving problems that require coordinated decisions over long time horizons remains an open problem. We postulate that to discover such strategies, a DRL agent should be able to combine local and high-level exploration behaviors. To this end, we introduce the concept of fast and slow curiosity, which aims to incentivize long-horizon exploration. Our method decomposes the curiosity bonus into a fast reward that handles local exploration and a slow reward that encourages global exploration. We formulate this bonus as the error in an agent's ability to reconstruct observations given their contexts. We further propose to dynamically weight local and high-level strategies by measuring state diversity. We evaluate our method on a variety of benchmark environments, including Minigrid, Super Mario Bros, and Atari games. Experimental results show that our agent outperforms prior approaches in most tasks in terms of exploration efficiency and mean scores.
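The abstract's idea of mixing a fast (local) and a slow (global) curiosity reward according to state diversity can be sketched as follows. This is a minimal illustration only: the pairwise-distance diversity proxy, the sigmoid mixing weight, and the names `fast_error` and `slow_error` (standing in for the two reconstruction errors described above) are assumptions for exposition, not the authors' exact formulation.

```python
import numpy as np

def intrinsic_bonus(fast_error, slow_error, recent_states, beta=1.0):
    """Hypothetical combination of a fast (local) and a slow (global)
    curiosity reward, mixed by a crude state-diversity estimate.
    The diversity measure and the sigmoid weighting are illustrative
    assumptions, not the paper's exact method."""
    states = np.asarray(recent_states, dtype=np.float64)
    # Diversity proxy: mean pairwise Euclidean distance among recent states.
    if len(states) > 1:
        diffs = states[:, None, :] - states[None, :, :]
        diversity = np.linalg.norm(diffs, axis=-1).mean()
    else:
        diversity = 0.0
    # Low diversity -> lean on the slow (global) reward to push toward new
    # regions; high diversity -> lean on the fast (local) reward.
    w = 1.0 / (1.0 + np.exp(-beta * (diversity - 1.0)))
    return w * fast_error + (1.0 - w) * slow_error

# Example: combine per-step reconstruction errors into a single bonus.
bonus = intrinsic_bonus(fast_error=0.8, slow_error=0.3,
                        recent_states=np.random.rand(32, 16))
```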
Pages: 1086-1107
Page count: 22