Fast and slow curiosity for high-level exploration in reinforcement learning

Cited by: 19
Authors
Bougie, Nicolas [1 ,2 ]
Ichise, Ryutaro [1 ,2 ]
Affiliations
[1] Natl Inst Informat, Tokyo, Japan
[2] Grad Univ Adv Studies, Sokendai, Tokyo, Japan
Keywords
Reinforcement learning; Exploration; Autonomous exploration; Curiosity in reinforcement learning
DOI
10.1007/s10489-020-01849-3
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep reinforcement learning (DRL) algorithms rely on carefully designed environment rewards that are extrinsic to the agent. However, in many real-world scenarios, rewards are sparse or delayed, motivating the need to discover efficient exploration strategies. While intrinsically motivated agents hold the promise of better local exploration, solving problems that require coordinated decisions over long time horizons remains an open problem. We postulate that to discover such strategies, a DRL agent should be able to combine local and high-level exploration behaviors. To this end, we introduce the concept of fast and slow curiosity, which aims to incentivize long-horizon exploration. Our method decomposes the curiosity bonus into a fast reward that deals with local exploration and a slow reward that encourages global exploration. We formulate this bonus as the error in the agent's ability to reconstruct observations given their contexts. We further propose to dynamically weight local and high-level strategies by measuring state diversity. We evaluate our method on a variety of benchmark environments, including Minigrid, Super Mario Bros, and Atari games. Experimental results show that our agent outperforms prior approaches in most tasks in terms of exploration efficiency and mean scores.
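Illustrative sketch (not part of the published record): the abstract describes a curiosity bonus split into a fast, local reward and a slow, global reward, each defined as a reconstruction error, and blended according to a state-diversity measure. The minimal Python/numpy sketch below shows one plausible way such a combined bonus could be computed; the reconstruction "models", the local/global contexts, and the diversity-based weight here are placeholder assumptions rather than the authors' implementation.

import numpy as np

def reconstruction_error(model, observation, context):
    # Mean squared error between the observation and its reconstruction from
    # the given context. `model` is any callable mapping a context to a
    # predicted observation; the paper uses learned modules, this is a stand-in.
    prediction = model(context)
    return float(np.mean((observation - prediction) ** 2))

def combined_curiosity_bonus(fast_model, slow_model, observation,
                             local_context, global_context, diversity_weight):
    # Blend a fast (local) and a slow (global) curiosity reward.
    # `diversity_weight` in [0, 1] is assumed to come from some state-diversity
    # measure; higher diversity shifts emphasis toward the slow, global bonus.
    r_fast = reconstruction_error(fast_model, observation, local_context)
    r_slow = reconstruction_error(slow_model, observation, global_context)
    return (1.0 - diversity_weight) * r_fast + diversity_weight * r_slow

# Toy usage with identity "models" and random vectors standing in for states.
rng = np.random.default_rng(0)
obs = rng.normal(size=8)
bonus = combined_curiosity_bonus(lambda c: c, lambda c: c, obs,
                                 obs + 0.1 * rng.normal(size=8),
                                 rng.normal(size=8), diversity_weight=0.3)
print(f"intrinsic bonus: {bonus:.4f}")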
Pages: 1086-1107
Number of pages: 22