Learning Navigational Visual Representations with Semantic Map Supervision

Cited by: 10
Authors
Hong, Yicong [1 ,2 ]
Zhou, Yang [1 ]
Zhang, Ruiyi [1 ]
Dernoncourt, Franck [1 ]
Bui, Trung [1 ]
Gould, Stephen [2 ]
Tan, Hao [1 ]
Affiliations
[1] Adobe Res, San Francisco, CA 94107 USA
[2] Australian Natl Univ, Canberra, ACT, Australia
Source
2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023
Keywords
LANGUAGE;
DOI
10.1109/ICCV51070.2023.00284
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Being able to perceive the semantics and the spatial structure of the environment is essential for visual navigation of a household robot. However, most existing works only employ visual backbones pre-trained either with independent images for classification or with self-supervised learning methods to adapt to the indoor navigation domain, neglecting the spatial relationships that are essential to the learning of navigation. Inspired by the behavior that humans naturally build semantically and spatially meaningful cognitive maps in their brains during navigation, in this paper, we propose a novel navigational-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps (Ego2-Map). We apply the visual transformer as the backbone encoder and train the model with data collected from the large-scale Habitat-Matterport3D environments. Ego2-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation. Experiments show that agents using our learned representations on object-goal navigation outperform recent visual pre-training methods. Moreover, our representations significantly improve vision-and-language navigation in continuous environments for both high-level and low-level action spaces, achieving new state-of-the-art results of 47% SR and 41% SPL on the test server.
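The abstract describes learning representations by contrasting egocentric views against semantic maps. A common way to realize such a cross-modal contrastive objective is a symmetric InfoNCE loss over a batch of matched view/map embedding pairs; the sketch below is illustrative only (a NumPy stand-in, not the authors' implementation — the function names and the 0.07 temperature are assumptions):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Scale rows to unit L2 norm."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def info_nce(view_emb, map_emb, temperature=0.07):
    """Symmetric InfoNCE loss for a batch of paired embeddings.

    view_emb, map_emb: (N, D) arrays; row i of each forms a positive
    pair, and all other rows in the batch serve as in-batch negatives.
    """
    v = l2_normalize(np.asarray(view_emb, dtype=float))
    m = l2_normalize(np.asarray(map_emb, dtype=float))
    logits = v @ m.T / temperature  # (N, N) cosine-similarity logits

    def nll_diag(lg):
        # Mean negative log-softmax of the diagonal (matched) entries,
        # computed with a numerically stable log-sum-exp.
        row_max = lg.max(axis=1)
        lse = row_max + np.log(np.exp(lg - row_max[:, None]).sum(axis=1))
        return float((lse - np.diag(lg)).mean())

    # Average the view->map and map->view directions.
    return 0.5 * (nll_diag(logits) + nll_diag(logits.T))
```

With this loss, matched pairs (high diagonal similarity) yield a value near zero, while mismatched pairs are penalized, which is the behavior the Ego2-Map objective relies on to pull egocentric features toward their map counterparts.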
Pages: 3032-3044
Page count: 13