A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning

Cited by: 11
Authors
Kamath, Aishwarya [1 ,2 ]
Anderson, Peter [2 ]
Wang, Su [2 ]
Koh, Jing Yu [2 ,3 ]
Ku, Alexander [2 ]
Waters, Austin [2 ]
Yang, Yinfei [2 ,4 ]
Baldridge, Jason [2 ]
Parekh, Zarana [2 ]
Affiliations
[1] NYU, New York, NY 10003 USA
[2] Google Res, Mountain View, CA 94043 USA
[3] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[4] Apple, Cupertino, CA USA
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.01041
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions. However, given the scarcity of human instruction data and the limited diversity of training environments, these agents still struggle with complex language grounding and spatial language understanding. Pretraining on large text and image-text datasets from the web has been extensively explored, but the improvements are limited. We investigate large-scale augmentation with synthetic instructions. We take 500+ indoor environments captured in densely-sampled 360° panoramas, construct navigation trajectories through these panoramas, and generate a visually-grounded instruction for each trajectory using Marky [63], a high-quality multilingual navigation instruction generator. We also synthesize image observations from novel viewpoints using an image-to-image GAN [27]. The resulting dataset of 4.2M instruction-trajectory pairs is two orders of magnitude larger than existing human-annotated datasets, and contains a wider variety of environments and viewpoints. To efficiently leverage data at this scale, we train a simple transformer agent with imitation learning. On the challenging RxR dataset, our approach outperforms all existing RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in unseen test environments. Our work points to a new path to improving instruction-following agents, emphasizing large-scale training on near-human quality synthetic instructions.
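The NDTW figures quoted above are normalized Dynamic Time Warping scores, the path-fidelity metric used for RxR: the cumulative DTW distance between the agent's path and the reference path, normalized by the reference-path length and a success-threshold distance, then mapped through an exponential so that 1.0 means a perfect match. As a reading aid only (this is not the authors' evaluation code; the 3 m threshold is assumed here, following the usual R2R/RxR convention), a minimal sketch:

```python
import numpy as np

def ndtw(prediction, reference, success_threshold=3.0):
    """Normalized Dynamic Time Warping between a predicted and a reference path.

    Both paths are (N, 2) arrays of x/y positions in metres.
    nDTW = exp(-DTW(P, R) / (|R| * d_th)), with d_th the success threshold.
    """
    prediction = np.asarray(prediction, dtype=float)
    reference = np.asarray(reference, dtype=float)
    n, m = len(prediction), len(reference)

    # dtw[i, j]: minimal cumulative cost of aligning the first i predicted
    # points with the first j reference points.
    dtw = np.full((n + 1, m + 1), np.inf)
    dtw[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(prediction[i - 1] - reference[j - 1])
            dtw[i, j] = cost + min(dtw[i - 1, j], dtw[i, j - 1], dtw[i - 1, j - 1])

    return float(np.exp(-dtw[n, m] / (m * success_threshold)))

# Example: a predicted path that closely hugs the reference scores near 1.0.
reference_path = [(0, 0), (2, 0), (4, 0), (6, 0)]
predicted_path = [(0, 0), (2, 0.5), (4, 0.5), (6, 0)]
print(f"nDTW = {ndtw(predicted_path, reference_path):.3f}")
```

Scores such as 79.1 in the abstract correspond to this quantity averaged over episodes and scaled by 100.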
Pages: 10813 - 10823
Number of pages: 11
References (73 entries in total)
  • [1] Anderson, Peter; Wu, Qi; Teney, Damien; Bruce, Jake; Johnson, Mark; Sunderhauf, Niko; Reid, Ian; Gould, Stephen; van den Hengel, Anton. Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 3674-3683.
  • [2] Anderson, Peter. CoRL, 2021.
  • [3] Andreas, J. EMNLP, 2015: 1165.
  • [4] [Anonymous]. NeurIPS, 2018.
  • [5] [Anonymous]. ICML, 2016.
  • [6] Artzi, Yoav. Transactions of the Association for Computational Linguistics, 2013, 1: 49. DOI: 10.1162/tacl_a_00209.
  • [7] Bisk, Y. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 8718.
  • [8] Chang, Angel; Dai, Angela; Funkhouser, Thomas; Halber, Maciej; Niessner, Matthias; Savva, Manolis; Song, Shuran; Zeng, Andy; Zhang, Yinda. Matterport3D: Learning from RGB-D Data in Indoor Environments. Proceedings 2017 International Conference on 3D Vision (3DV), 2017: 667-676.
  • [9] Chen, D.L. AAAI, 2011: 859.
  • [10] Chen, S.Z. Advances in Neural Information Processing Systems, 2021, 34.