Provable Benefits of Representational Transfer in Reinforcement Learning

Cited by: 0
Authors
Agarwal, Alekh [1 ]
Song, Yuda [2 ]
Sun, Wen [3 ]
Wang, Kaiwen [3 ]
Wang, Mengdi [4 ]
Zhang, Xuezhou [4 ]
Affiliations
[1] Google, Mountain View, CA 94043 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Cornell Univ, Ithaca, NY 14853 USA
[4] Princeton Univ, Princeton, NJ 08544 USA
Source
THIRTY SIXTH ANNUAL CONFERENCE ON LEARNING THEORY, 2023, Vol. 195
Funding
National Science Foundation (USA);
Keywords
Transfer Learning; Low-Rank MDPs; Reinforcement Learning Theory;
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
We study the problem of representational transfer in RL, where an agent first pretrains on a number of source tasks to discover a shared representation, which is subsequently used to learn a good policy in a target task. We propose a new notion of task relatedness between source and target tasks, and develop a novel approach for representational transfer under this assumption. Concretely, we show that given generative access to the source tasks, we can discover a representation with which subsequent linear RL techniques quickly converge to a near-optimal policy in the target task. The sample complexity is close to that of knowing the ground-truth features in the target task, and comparable to prior representation-learning results in the source tasks. We complement our positive results with lower bounds in the absence of generative access, and validate our findings with an empirical evaluation on rich-observation MDPs that require deep exploration. In our experiments, we observe a speed-up in target-task learning from pre-training, and also validate the need for generative access to the source tasks.
Pages: 74
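The abstract describes a two-phase pipeline: discover a shared representation from source tasks (using generative access), then run a linear RL method on top of that representation in the target task. The sketch below is a minimal, hypothetical illustration of that pipeline in a small tabular low-rank MDP. The SVD-based feature recovery and the plain least-squares value iteration are illustrative stand-ins, not the authors' algorithm, and all sizes, names, and sampling choices are assumptions made for this example.

```python
# Illustrative sketch (NOT the paper's algorithm) of representational transfer:
# (1) use generative access to source tasks to recover a shared feature map
#     phi(s, a); (2) reuse phi for linear RL in the target task.
import numpy as np

rng = np.random.default_rng(0)
S, A, d, H = 20, 4, 5, 10                 # states, actions, feature dim, horizon

# Ground-truth shared features phi*(s,a) and per-task measures mu_t, so each
# task's transition kernel is low-rank: P_t(s' | s, a) = <phi*(s,a), mu_t(s')>.
phi_star = rng.dirichlet(np.ones(d), size=S * A)          # shape (S*A, d)

def make_task():
    mu = rng.dirichlet(np.ones(S), size=d)                # shape (d, S)
    return phi_star @ mu                                   # shape (S*A, S)

source_Ps = [make_task() for _ in range(3)]
target_P = make_task()
target_r = phi_star @ rng.uniform(size=d)                  # rewards linear in phi*

# ---- Phase 1: representation discovery from source tasks -------------------
# Generative access: sample next states from any (s, a) pair in each source
# task, estimate its transition matrix, then take a rank-d factorization of the
# stacked estimates as a stand-in for the paper's representation learning step.
n = 200
estimates = []
for P in source_Ps:
    counts = np.zeros((S * A, S))
    for sa in range(S * A):
        nxt = rng.choice(S, size=n, p=P[sa])
        np.add.at(counts[sa], nxt, 1.0)
    estimates.append(counts / n)
U, _, _ = np.linalg.svd(np.hstack(estimates), full_matrices=False)
phi_hat = U[:, :d]                                         # learned features

# ---- Phase 2: linear RL in the target task with phi_hat --------------------
# Least-squares value iteration: regress Bellman targets onto phi_hat. For
# brevity this uses the true target model; a real agent would use sampled
# transitions plus an exploration bonus.
V = np.zeros(S)
for h in range(H):
    targets = target_r + target_P @ V
    w, *_ = np.linalg.lstsq(phi_hat, targets, rcond=None)
    Q = (phi_hat @ w).reshape(S, A)
    V = Q.max(axis=1)

greedy_policy = Q.argmax(axis=1)
print("estimated optimal value at state 0:", V[0])
```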