Attention Based Natural Language Grounding by Navigating Virtual Environment

Cited by: 7
Authors
Sinha, Abhishek [1 ]
Akilesh, B. [1 ,2 ]
Sarkar, Mausoom [1 ]
Krishnamurthy, Balaji [1 ]
Affiliations
[1] Adobe Systems, Noida, India
[2] Université de Montréal, Mila, Montreal, QC, Canada
Source
2019 IEEE Winter Conference on Applications of Computer Vision (WACV) | 2019
DOI
10.1109/WACV.2019.00031
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
In this work, we focus on the problem of grounding language by training an agent to follow a set of natural language instructions and navigate to a target object in an environment. The agent receives visual information through raw pixels and a natural language instruction specifying the task to be achieved, and is trained end-to-end. We develop an attention mechanism for multi-modal fusion of the visual and textual modalities that allows the agent to learn to complete the task and achieve language grounding. Our experimental results show that our attention mechanism outperforms the existing multi-modal fusion mechanisms proposed for this task, in both 2D and 3D environments, in terms of both speed and success rate. We show that the learnt textual representations are semantically meaningful, as they follow vector arithmetic in the embedding space. The effectiveness of our attention approach over contemporary fusion mechanisms is also highlighted by the textual embeddings learnt by the different approaches. We also show that our model generalizes effectively to unseen scenarios and exhibits zero-shot generalization capabilities in both 2D and 3D environments. The code for our 2D environment, as well as the models we developed for both 2D and 3D, is available at https://github.com/rl-lang-grounding/rl-lang-ground.
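The abstract describes the fusion mechanism only at a high level. Below is a minimal PyTorch sketch of how attention-based fusion of a visual feature map with an instruction embedding could look; the class name, layer sizes, and dot-product scoring are illustrative assumptions, not the paper's actual architecture (see the released code at the URL above for the authors' implementation).

```python
# Hypothetical sketch: an instruction embedding attends over spatial
# CNN features, producing a fused representation for the policy.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuses a CNN feature map with a sentence embedding via attention.

    All sizes here are illustrative assumptions, not the paper's values.
    """

    def __init__(self, vis_channels=64, text_dim=256):
        super().__init__()
        # Project the instruction embedding into the visual feature space
        # so it can act as an attention query.
        self.query = nn.Linear(text_dim, vis_channels)

    def forward(self, vis_feat, text_emb):
        # vis_feat: (B, C, H, W) features from a CNN over raw pixels
        # text_emb: (B, T) instruction embedding (e.g., final GRU state)
        b, c, h, w = vis_feat.shape
        keys = vis_feat.view(b, c, h * w)               # (B, C, HW) spatial keys
        q = self.query(text_emb).unsqueeze(1)           # (B, 1, C) query
        scores = torch.bmm(q, keys)                     # (B, 1, HW) dot-product scores
        attn = torch.softmax(scores, dim=-1)            # weights over spatial locations
        fused = torch.bmm(keys, attn.transpose(1, 2))   # (B, C, 1) attention-weighted sum
        return fused.squeeze(-1)                        # (B, C) fused representation


# Usage with dummy inputs:
fusion = AttentionFusion(vis_channels=64, text_dim=256)
img = torch.randn(2, 64, 7, 7)    # batch of CNN feature maps
txt = torch.randn(2, 256)         # batch of instruction embeddings
out = fusion(img, txt)            # -> (2, 64) fused features
```

A dot-product attention of this form lets the instruction select which spatial regions of the image matter, which is one plausible way to realize the multi-modal fusion the abstract describes; simpler baselines it is compared against typically concatenate or element-wise gate the two modalities instead.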
Pages: 236-244
Number of pages: 9