Visual Language Maps for Robot Navigation

Cited by: 74
Authors
Huang, Chenguang [1 ]
Mees, Oier [1 ]
Zeng, Andy [2 ]
Burgard, Wolfram [3 ]
Affiliations
[1] Univ Freiburg, Freiburg, Germany
[2] Google Res, New York, NY USA
[3] Univ Technol Nuremberg, Nurnberg, Germany
Source
2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023) | 2023
Keywords
LOCALIZATION
DOI
10.1109/ICRA48891.2023.10160969
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Grounding language to the visual observations of a navigating agent can be performed using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image captions). While this is useful for matching images to natural language descriptions of object goals, it remains disjoint from the process of mapping the environment and thus lacks the spatial precision of classic geometric maps. To address this problem, we propose VLMaps, a spatial map representation that directly fuses pretrained visual-language features with a 3D reconstruction of the physical world. VLMaps can be autonomously built from video feed on robots using standard exploration approaches and enable natural language indexing of the map without additional labeled data. Specifically, when combined with large language models (LLMs), VLMaps can be used to (i) translate natural language commands into a sequence of open-vocabulary navigation goals (which, beyond prior work, can be spatial by construction, e.g., "in between the sofa and the TV" or "three meters to the right of the chair") directly localized in the map, and (ii) be shared among multiple robots with different embodiments to generate new obstacle maps on-the-fly (by using a list of obstacle categories). Extensive experiments carried out in simulated and real-world environments show that VLMaps enable navigation according to more complex language instructions than existing methods. Videos are available at https://vlmaps.github.io.
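As a rough illustration of the map-indexing step described in the abstract (not the authors' released code), the sketch below assumes a precomputed VLMap stored as an (H, W, D) grid of pixel-aligned visual-language features (e.g., CLIP-aligned embeddings fused from the robot's RGB-D video, with D matching the CLIP text embedding size) and uses the public OpenAI CLIP text encoder to (i) localize an open-vocabulary landmark in the map and (ii) derive an embodiment-specific obstacle map from a list of obstacle categories. The file name, distractor labels, threshold, and the `localize` helper are all hypothetical.

```python
import numpy as np
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

# Hypothetical precomputed VLMap: an (H, W, D) grid whose cells hold
# visual-language features fused from the robot's posed RGB-D video.
vlmap = np.load("vlmap_features.npy")  # assumed file, shape (H, W, D)
H, W, D = vlmap.shape

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def embed_text(labels):
    """Encode a list of text labels into normalized CLIP text embeddings."""
    tokens = clip.tokenize(list(labels)).to(device)
    with torch.no_grad():
        feats = model.encode_text(tokens).float()
    return torch.nn.functional.normalize(feats, dim=-1).cpu().numpy()

def localize(query, distractors=("other",)):
    """Return an (H, W) score map of how well each cell matches `query`
    relative to a set of distractor labels (open-vocabulary indexing)."""
    text = embed_text([query, *distractors])                 # (1+K, D)
    cells = vlmap.reshape(-1, D)
    cells = cells / (np.linalg.norm(cells, axis=-1, keepdims=True) + 1e-8)
    sims = cells @ text.T                                    # (H*W, 1+K)
    probs = np.exp(sims) / np.exp(sims).sum(-1, keepdims=True)
    return probs[:, 0].reshape(H, W)                         # score for `query`

# (i) Open-vocabulary goal: the best-matching map cell for a language landmark.
goal_scores = localize("sofa", distractors=("chair", "table", "floor", "other"))
goal_row, goal_col = np.unravel_index(goal_scores.argmax(), goal_scores.shape)

# (ii) Embodiment-specific obstacle map from a list of obstacle categories:
# e.g., a drone might omit "table" from the list, while a ground robot keeps it.
obstacle_categories = ["table", "chair", "sofa", "wall"]
obstacle_map = np.zeros((H, W), dtype=bool)
for category in obstacle_categories:
    obstacle_map |= localize(category, distractors=("floor", "other")) > 0.5
```

In the approach described above, an LLM additionally parses the full instruction into a sequence of such open-vocabulary goals plus spatial offsets (e.g., "three meters to the right of the chair"); the sketch covers only the step of indexing the fused map with text.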
Pages: 10608 - 10615
Page count: 8