Block Contextual MDPs for Continual Learning

Cited by: 0
Authors
Sodhani, Shagun [1 ]
Meier, Franziska [1 ]
Pineau, Joelle [1 ]
Zhang, Amy [1 ,2 ]
Affiliations
[1] Facebook AI Research, Menlo Park, CA 94025, USA
[2] University of California, Berkeley, Berkeley, CA, USA
Source
LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, Vol. 168, 2022
Keywords
Reinforcement Learning; MDP; Block Contextual MDP; Continual Learning
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
In reinforcement learning (RL), when defining a Markov Decision Process (MDP), the environment dynamics are implicitly assumed to be stationary. This assumption of stationarity, while simplifying, is unrealistic in many scenarios. In continual reinforcement learning, the sequence of tasks is a further source of nonstationarity. In this work, we propose to examine the continual reinforcement learning setting through the block contextual MDP (BC-MDP) framework, which enables us to relax the assumption of stationarity. This framework challenges RL algorithms to handle both nonstationarity and rich observations and, by additionally leveraging smoothness properties, enables us to study generalization bounds for this setting. Finally, we take inspiration from adaptive control to propose a novel algorithm that addresses the challenges introduced by this more realistic BC-MDP setting, allows for zero-shot adaptation at evaluation time, and achieves strong performance on several nonstationary environments.
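For context, the following is a minimal sketch of how a block contextual MDP of the kind described in the abstract is commonly formalized. The notation (context space C, context-dependent kernel p_c and reward r_c, emission function q_c, distribution metric W, context metric d, Lipschitz constants L_p and L_r) is illustrative and is not taken from this record.

```latex
% Sketch of a Block Contextual MDP (BC-MDP); notation is illustrative.
% A BC-MDP is a family of MDPs indexed by a context c:
\[
  \mathcal{M} = (\mathcal{C}, \mathcal{S}, \mathcal{A}, \mathcal{O}, \mathcal{M}'),
  \qquad
  \mathcal{M}'(c) = (\mathcal{S}, \mathcal{A}, \mathcal{O}, p_c, q_c, r_c, \gamma),
\]
% where p_c(s' | s, a) is the context-dependent transition kernel,
% r_c(s, a) the context-dependent reward, and q_c(o | s) the emission
% function producing rich observations o from the latent state s
% (the "block" assumption: each observation is generated by a unique
% latent state). Nonstationarity is modeled by letting the context c
% vary across tasks or over time. Generalization bounds across contexts
% rely on a smoothness assumption, e.g. that c -> M'(c) is Lipschitz:
\[
  W\!\bigl(p_{c}(\cdot \mid s, a),\, p_{c'}(\cdot \mid s, a)\bigr)
    \le L_p \, d(c, c'),
  \qquad
  \bigl| r_{c}(s, a) - r_{c'}(s, a) \bigr| \le L_r \, d(c, c'),
\]
% for all s, a, where W is a metric on distributions (e.g. Wasserstein)
% and d is a metric on the context space.
```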
Pages: 16