Instance-Level Semantic Maps for Vision Language Navigation

Cited by: 1
Authors
Nanwani, Laksh [1 ]
Agarwal, Anmol [1 ]
Jain, Kanishk [1 ]
Prabhakar, Raghav [1 ]
Monis, Aaron [1 ]
Mathur, Aditya [1 ]
Jatavallabhula, Krishna Murthy [2 ]
Hafez, A. H. Abdul [3 ]
Gandhi, Vineet [1 ]
Krishna, K. Madhava [1 ]
Affiliations
[1] International Institute of Information Technology (IIIT), KCIS, Hyderabad, Telangana, India
[2] MIT CSAIL, Cambridge, MA, USA
[3] Hasan Kalyoncu University, Gaziantep, Türkiye
Source
2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) | 2023
DOI
10.1109/RO-MAN57019.2023.10309534
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Humans have a natural ability to form semantic associations with the objects around them, building a mental map of the environment that lets them navigate on demand when given linguistic instructions. A natural goal in Vision Language Navigation (VLN) research is to impart autonomous agents with similar capabilities. Recent works take a step towards this goal by creating a semantic spatial map representation of the environment without any labeled data. However, these representations are of limited practical use because they do not distinguish between different instances of the same object class. In this work, we address this limitation by integrating instance-level information into the spatial map representation using a community detection algorithm, and by exploiting the word ontology learned by large language models (LLMs) to perform open-set semantic associations in the map representation. The resulting map representation improves navigation performance by two-fold (233%) over the baseline on realistic language commands with instance-specific descriptions. We validate the practicality and effectiveness of our approach through extensive qualitative and quantitative experiments.
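To make the instance-grouping idea concrete, here is a minimal sketch of clustering repeated per-class detections into map-level object instances via community detection on a spatial proximity graph. This is an illustrative reconstruction, not the authors' released code: the example detections, the 0.5 m distance threshold, and the use of networkx's Louvain implementation are all assumptions.

```python
# Sketch: group per-class object detections into instances via
# community detection on a proximity graph (illustrative only).
import math
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Hypothetical detections: (class label, (x, y) map position in metres)
detections = [
    ("chair", (1.0, 1.1)), ("chair", (1.2, 0.9)),  # one chair seen twice
    ("chair", (4.0, 4.2)),                         # a second, distinct chair
    ("sofa",  (2.0, 2.1)),
]

THRESH = 0.5  # metres; an assumed hyperparameter

# Connect detections of the same class that lie within the threshold.
G = nx.Graph()
G.add_nodes_from(range(len(detections)))
for i in range(len(detections)):
    for j in range(i + 1, len(detections)):
        ci, pi = detections[i]
        cj, pj = detections[j]
        d = math.dist(pi, pj)
        if ci == cj and d < THRESH:
            G.add_edge(i, j, weight=1.0 / (1e-6 + d))  # closer -> stronger

# Each detected community corresponds to one object instance on the map.
instances = louvain_communities(G, weight="weight", seed=0)
for k, inst in enumerate(instances):
    labels = {detections[i][0] for i in inst}
    print(f"instance {k}: class={labels}, detections={sorted(inst)}")
```

Here the two nearby chair detections collapse into one instance while the far-away chair stays separate, which is exactly the distinction the paper's instance-level map preserves. Open-set semantic association could then, for instance, score a language query against instance labels via LLM or word-embedding similarity; that step is not shown here.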
Pages: 507-512
Page count: 6