Steady State Analysis of Episodic Reinforcement Learning

Cited by: 0
Authors
Huang Bojun [1]
Affiliations
[1] Rakuten Institute of Technology, Tokyo, Japan
Source
Advances in Neural Information Processing Systems 33 (NeurIPS 2020) | 2020 / Vol. 33
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
This paper proves that the episodic learning environment of every finite-horizon decision task has a unique steady state under any behavior policy, and that the marginal distribution of the agent's input indeed converges to the steady-state distribution in essentially all episodic learning processes. This observation supports a mindset that interestingly reverses conventional wisdom: while the existence of unique steady states was often presumed in continual learning but considered less relevant in episodic learning, it turns out that their existence is guaranteed for the latter. Based on this insight, the paper unifies episodic and continual RL around several important concepts that have been treated separately in these two RL formalisms. Practically, the existence of a unique and approachable steady state enables a general way to collect data in episodic RL tasks, which the paper applies to policy gradient algorithms as a demonstration, based on a new steady-state policy gradient theorem. Finally, the paper also proposes and experimentally validates a perturbation method that facilitates rapid steady-state convergence in real-world RL tasks.
Pages: 12
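
The abstract's central claim, that the episodic process induced by a finite-horizon task and a fixed behavior policy has a unique steady state which the marginal distribution of the agent's input approaches, can be illustrated numerically. The sketch below is not taken from the paper: it builds a small random MDP with random early termination plus a hard horizon cap, forms the resulting Markov chain over (in-episode time step, state) pairs with a reset at every episode boundary, and power-iterates the marginal distribution to show that it settles to a unique fixed point. All sizes, the uniform initial-state distribution rho0, the termination probabilities p_term, and the particular chain construction are illustrative assumptions, not the paper's formal definitions.

```python
# Illustrative toy example (not the paper's code): check numerically that the
# "episodic chain" of a small finite-horizon MDP under a fixed behavior policy
# has a unique steady state that the marginal input distribution converges to.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 4, 2, 5   # arbitrary toy sizes

# Random MDP dynamics P[a, s, s'] and a fixed stochastic behavior policy pi[s, a].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
pi = rng.random((n_states, n_actions))
pi /= pi.sum(axis=1, keepdims=True)
rho0 = np.full(n_states, 1.0 / n_states)          # initial-state distribution

# State-to-state kernel under the policy: T[s, s'] = sum_a pi(a|s) P(s'|s, a).
T = np.einsum('sa,asp->sp', pi, P)

# Per-state early-termination probability; episodes also end after `horizon`
# steps. Random episode lengths make this toy chain aperiodic.
p_term = rng.uniform(0.1, 0.3, size=n_states)

# Markov chain over the agent's augmented input (in-episode step h, state s),
# with a reset to rho0 at every episode boundary.
N = horizon * n_states
M = np.zeros((N, N))
for h in range(horizon):
    for s in range(n_states):
        i = h * n_states + s
        if h < horizon - 1:
            # continue the episode ...
            M[i, (h + 1) * n_states:(h + 2) * n_states] = (1.0 - p_term[s]) * T[s]
            # ... or terminate early and reset
            M[i, :n_states] += p_term[s] * rho0
        else:
            # horizon reached: the episode ends and the environment resets
            M[i, :n_states] = rho0

# Power-iterate the marginal distribution from an arbitrary starting point.
mu = rng.random(N)
mu /= mu.sum()
for t in range(100_000):
    nxt = mu @ M
    if np.abs(nxt - mu).sum() < 1e-12:
        print(f"marginal distribution converged after {t + 1} steps")
        break
    mu = nxt

# Cross-check: the fixed point is the left eigenvector of M for eigenvalue 1.
w, V = np.linalg.eig(M.T)
v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
v /= v.sum()
print("max deviation from the stationary eigenvector:", np.abs(mu - v).max())
```

The random episode lengths matter in this toy: with a single deterministic episode length the chain over (h, s) pairs would be periodic, so the per-step marginal would cycle rather than converge. The abstract's perturbation method is likewise motivated by speeding up steady-state convergence, though its actual mechanism is described in the paper, not here.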