Scene Memory Transformer for Embodied Agents in Long-Horizon Tasks

Cited by: 124
Authors
Fang, Kuan [1 ]
Toshev, Alexander [2 ]
Li Fei-Fei [1 ]
Savarese, Silvio [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
[2] Google Brain, Mountain View, CA USA
Source
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) | 2019
Keywords
NAVIGATION; VISION;
DOI
10.1109/CVPR.2019.00063
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many robotic applications require the agent to perform long-horizon tasks in partially observable environments. In such applications, a decision at any step can depend on observations received far in the past, so the ability to properly store and utilize the long-term history is crucial. In this work, we propose a novel memory-based policy, named Scene Memory Transformer (SMT). The proposed policy embeds and adds each observation to a memory and uses the attention mechanism to exploit spatio-temporal dependencies. This model is generic and can be efficiently trained with reinforcement learning over long episodes. On a range of visual navigation tasks, SMT outperforms existing reactive and memory-based policies by a clear margin.
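The core mechanism the abstract describes (embed each observation into a growing memory, then attend over the whole memory at decision time) can be sketched as follows. This is a hypothetical, heavily simplified illustration, not the authors' implementation: it uses single-head scaled dot-product attention with no learned projections, and the class and method names (`SceneMemory`, `add`, `attend`) are invented for this sketch.

```python
import math

class SceneMemory:
    """Toy sketch of the scene-memory idea: keep one embedding per past
    observation and let the policy attend over all of them."""

    def __init__(self):
        self.embeddings = []  # one vector per past observation

    def add(self, obs_embedding):
        # Each new observation embedding is simply appended; the memory
        # grows with the episode length.
        self.embeddings.append(obs_embedding)

    def attend(self, query):
        # Scaled dot-product attention of the current query over every
        # stored embedding (single head, no projection matrices).
        d = len(query)
        scores = [sum(q * k for q, k in zip(query, m)) / math.sqrt(d)
                  for m in self.embeddings]
        # Softmax over the attention scores (shifted for stability).
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of memory vectors -> context vector for the policy.
        return [sum(w * m[i] for w, m in zip(weights, self.embeddings))
                for i in range(d)]

memory = SceneMemory()
memory.add([1.0, 0.0])  # embedding of observation at step 0
memory.add([0.0, 1.0])  # embedding of observation at step 1
context = memory.attend([1.0, 0.0])  # query resembling the first observation
```

Because attention is content-based, the memory the paper proposes avoids committing to a fixed-size recurrent state: observations far in the past remain directly addressable, which is exactly what long-horizon, partially observable tasks need.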
Pages: 538 / 547
Page count: 10