Grounding Language for Transfer in Deep Reinforcement Learning

Cited by: 34
Authors
Narasimhan, Karthik [1 ]
Barzilay, Regina [2 ]
Jaakkola, Tommi [2 ]
Affiliations
[1] Princeton Univ, Dept Comp Sci, 35 Olden St, Princeton, NJ 08540 USA
[2] MIT, Comp Sci & Artificial Intelligence Lab, 32 Vassar St, Cambridge, MA 02139 USA
Keywords
Autonomous agents; Deep learning
DOI
10.1613/jair.1.11263
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we explore the use of natural language to drive transfer for reinforcement learning (RL). Despite the widespread application of deep RL techniques, learning generalized policy representations that work across domains remains a challenging problem. We demonstrate that textual descriptions of environments provide a compact intermediate channel that facilitates effective policy transfer. Specifically, by learning to ground the meaning of text in the dynamics of the environment, such as transitions and rewards, an autonomous agent can effectively bootstrap policy learning on a new domain given its description. We employ a model-based RL approach consisting of a differentiable planning module, a model-free component, and a factorized state representation to effectively use entity descriptions. Our model outperforms prior work in both transfer and multi-task scenarios across a variety of environments. For instance, we achieve up to 14% and 11.5% absolute improvement over previously existing models in terms of average and initial rewards, respectively.
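To make the idea of a factorized, text-grounded state representation concrete, here is a minimal illustrative sketch (not the authors' implementation; all names, the vocabulary, and the embedding dimension are hypothetical). Each entity is represented by its position features concatenated with an embedding of its textual description, and the per-entity vectors are pooled into one state vector that an agent could feed to its planning or model-free components:

```python
import numpy as np

# Hypothetical sketch of a factorized state representation: each entity's
# features are combined with an embedding of its textual description.
rng = np.random.default_rng(0)

VOCAB = {"enemy": 0, "moves": 1, "randomly": 2,
         "wall": 3, "blocks": 4, "movement": 5}
EMB_DIM = 4
W_text = rng.normal(size=(len(VOCAB), EMB_DIM))  # toy word-embedding table


def embed_description(tokens):
    """Bag-of-words embedding of an entity's textual description."""
    ids = [VOCAB[t] for t in tokens]
    return W_text[ids].mean(axis=0)


def factorized_state(entities):
    """Concatenate (position features, description embedding) per entity,
    then mean-pool into a single compact state vector."""
    parts = [np.concatenate([np.asarray(pos, dtype=float),
                             embed_description(desc)])
             for pos, desc in entities]
    return np.mean(parts, axis=0)


# Two entities: a position plus a short description of its dynamics.
entities = [
    ((1.0, 2.0), ["enemy", "moves", "randomly"]),
    ((0.0, 3.0), ["wall", "blocks", "movement"]),
]
phi = factorized_state(entities)
print(phi.shape)  # 2 position dims + EMB_DIM description dims → (6,)
```

Because the description embedding, rather than an entity identity, carries the dynamics information, a new domain whose entities come with text can reuse the same downstream policy, which is the intuition behind the transfer result in the abstract.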
Pages: 849-874
Page count: 26