Transfer Reinforcement Learning under Unobserved Contextual Information

Cited by: 0
Authors
Zhang, Yan [1 ]
Zavlanos, Michael M. [1 ]
Affiliations
[1] Duke Univ, Dept Mech Engn & Mat Sci, Durham, NC 27706 USA
Source
2020 ACM/IEEE 11TH INTERNATIONAL CONFERENCE ON CYBER-PHYSICAL SYSTEMS (ICCPS 2020) | 2020
Keywords
Causal inference; transfer learning; reinforcement learning; causal bounds
DOI
10.1109/ICCPS48487.2020.00015
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this paper, we study a transfer reinforcement learning problem in which the state transitions and rewards are affected by the environmental context. Specifically, we consider a demonstrator agent that has access to a context-aware policy and can generate transition and reward data based on that policy. These data constitute the experience of the demonstrator. The goal is then to transfer this experience, excluding the underlying contextual information, to a learner agent that does not have access to the environmental context, so that the learner can obtain a control policy using fewer samples. It is well known that disregarding the causal effect of the contextual information can bias the transition and reward models estimated by the learner, resulting in a suboptimal learned policy. To address this challenge, we develop a method to obtain causal bounds on the transition and reward functions from the demonstrator's data, which we then use to derive causal bounds on the value functions. Using these value function bounds, we propose new Q-learning and UCB-Q-learning algorithms that converge to the true value function without bias. We provide numerical experiments on robot motion planning problems that validate the proposed value function bounds and demonstrate that the proposed algorithms can effectively use the demonstrator's data to accelerate the learning process of the learner.
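
As a rough illustration of the idea described in the abstract, the Python sketch below shows tabular Q-learning whose updates are projected onto given causal value bounds. Everything here is an illustrative assumption: the function clipped_q_learning, the toy two-state chain, and the loose bounds q_lo/q_hi stand in for the paper's actual bound construction from demonstrator data and its UCB-Q variant, neither of which is reproduced here.

import numpy as np

def clipped_q_learning(n_actions, step, q_lower, q_upper,
                       episodes=500, horizon=50, alpha=0.1,
                       gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning; every update is clipped to [q_lower, q_upper]."""
    rng = np.random.default_rng(seed)
    # Initialize inside the causal interval derived from the demonstrator.
    q = (q_lower + q_upper) / 2.0
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # Epsilon-greedy exploration.
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(q[s]))
            s_next, r, done = step(s, a, rng)
            target = r + gamma * np.max(q[s_next])
            q[s, a] += alpha * (target - q[s, a])
            # Project the update back onto the causal interval; if the
            # bounds contain the true Q-function, clipping discards only
            # biased estimates, never the optimum.
            q[s, a] = np.clip(q[s, a], q_lower[s, a], q_upper[s, a])
            s = s_next
            if done:
                break
    return q

# Toy 2-state chain used only to exercise the sketch; real causal
# bounds would be computed from the demonstrator's data.
def toy_step(s, a, rng):
    s_next = min(s + a, 1)
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward, s_next == 1

q_lo = np.zeros((2, 2))
q_hi = np.full((2, 2), 20.0)
print(clipped_q_learning(2, toy_step, q_lo, q_hi))

Tighter bounds shrink the interval the learner must search, which is the intuition behind the sample-efficiency gains the abstract reports.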
Pages: 75 - 86
Page count: 12