Reward Space Noise for Exploration in Deep Reinforcement Learning

Cited by: 4
Authors
Sun, Chuxiong [1 ]
Wang, Rui [1 ]
Li, Qian [2 ]
Hu, Xiaohui [3 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, 4 South Fourth St, Beijing, Peoples R China
[2] Univ Technol Sydney, Coll Comp Sci & Technol, Sydney, NSW 2007, Australia
[3] Chinese Acad Sci, Inst Software, 4 South Fourth St, Beijing, Peoples R China
Keywords
Reinforcement learning; exploration-exploitation; deep learning
DOI
10.1142/S0218001421520133
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A fundamental challenge in reinforcement learning (RL) is achieving efficient exploration in initially unknown environments. Most state-of-the-art RL algorithms drive exploration with action space noise. These classical strategies are computationally efficient and straightforward to implement, but they may fail to perform effectively in complex environments. To address this issue, we propose a novel strategy named reward space noise (RSN) for farsighted and consistent exploration in RL. By injecting stochasticity through the reward space, we change the agent's understanding of the environment and thereby perturb its behavior. We find that the simple RSN achieves consistent exploration and scales to complex domains without intensive computational cost. To demonstrate the effectiveness and scalability of the proposed method, we implement a deep Q-learning agent with reward noise and evaluate its exploratory performance on a set of Atari games that are challenging for the naive epsilon-greedy strategy. The results show that reward noise outperforms action noise in most games and performs comparably in the others. Concretely, early in training the best exploratory performance of reward noise is markedly better than that of action noise, which demonstrates that reward noise quickly explores valuable states and aids in finding the optimal policy. Moreover, the average scores and learning efficiency of reward noise remain higher than those of action noise throughout training, indicating that reward noise yields more stable and consistent performance.
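A minimal sketch of the reward-space-noise idea described in the abstract, shown here with a tabular Q-learning update for clarity rather than the paper's deep Q-network. The Gaussian perturbation, its scale, and all names below are illustrative assumptions, not the authors' exact formulation:

```python
import random

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.99, reward_noise_std=0.5):
    """One Q-learning backup in which stochasticity enters through the
    reward space (a noisy observed reward) instead of the action space."""
    # Perturb the observed reward before the Bellman backup.
    noisy_r = r + random.gauss(0.0, reward_noise_std)
    # Standard Q-learning target, computed from the noisy reward.
    target = noisy_r + gamma * max(q[s_next].values())
    q[s][a] += alpha * (target - q[s][a])
    return q

# Tiny two-state, two-action table; with reward_noise_std=0 this
# reduces to the ordinary Q-learning update.
q = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}
q_update(q, s=0, a=1, r=1.0, s_next=1, alpha=0.5, reward_noise_std=0.0)
```

Unlike epsilon-greedy action noise, which perturbs which action is taken, the noise here perturbs the value estimates themselves, so its effect on behavior persists across many subsequent greedy action choices.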
Pages: 21