AnyLoc: Towards Universal Visual Place Recognition

Cited by: 54
Authors
Keetha, Nikhil [1 ]
Mishra, Avneesh [2 ]
Karhade, Jay [1 ]
Jatavallabhula, Krishna Murthy [3 ]
Scherer, Sebastian [1 ]
Krishna, Madhava [2 ]
Garg, Sourav [4 ]
Affiliations
[1] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
[2] Int Inst Informat Technol, Hyderabad 500032, India
[3] MIT, Cambridge, MA 02139 USA
[4] Univ Adelaide, Australian Inst Machine Learning, Adelaide, SA 5000, Australia
Keywords
Feature extraction; Training; Visualization; Task analysis; Vocabulary; Semantics; Robustness; Localization; recognition; deep learning for visual perception; vision-based navigation
DOI
10.1109/LRA.2023.3343602
CLC Number
TP24 [Robotics]
Discipline Classification Code
080202; 1405
Abstract
Visual Place Recognition (VPR) is vital for robot localization. To date, the most performant VPR approaches are environment- and task-specific: while they exhibit strong performance in structured environments (predominantly urban driving), their performance degrades severely in unstructured environments, rendering most approaches brittle to robust real-world deployment. In this work, we develop a universal solution to VPR - a technique that works across a broad range of structured and unstructured environments (urban, outdoors, indoors, aerial, underwater, and subterranean environments) without any re-training or finetuning. We demonstrate that general-purpose feature representations derived from off-the-shelf self-supervised models with no VPR-specific training are the right substrate upon which to build such a universal VPR solution. Combining these derived features with unsupervised feature aggregation enables our suite of methods, AnyLoc, to achieve up to 4x significantly higher performance than existing approaches. We further obtain a 6% improvement in performance by characterizing the semantic properties of these features, uncovering unique domains which encapsulate datasets from similar environments. Our detailed experiments and analysis lay a foundation for building VPR solutions that may be deployed anywhere, anytime, and across anyview.
Pages: 1286-1293
Page count: 8
References
58 in total
[1] Ali-bey A, Chaib-draa B, Giguère P. MixVPR: Feature Mixing for Visual Place Recognition. IEEE/CVF WACV, 2023: 2997-3006.
[2] Ali-bey A, Chaib-draa B, Giguère P. GSV-Cities: Toward appropriate supervised visual place recognition. Neurocomputing, 2022, 513: 194-203.
[3] Amir S, et al. Deep ViT Features as Dense Visual Descriptors. arXiv:2112.05814, 2022.
[4] Arandjelović R, et al. NetVLAD: CNN Architecture for Weakly Supervised Place Recognition. IEEE Trans. Pattern Anal. Mach. Intell., 2018, 40: 1437. DOI: 10.1109/TPAMI.2017.2711011.
[5] Arandjelović R, Zisserman A. All About VLAD. IEEE CVPR, 2013: 1578-1585.
[6] Berton G, Masone C, Caputo B. Rethinking Visual Geo-localization for Large-Scale Applications. IEEE/CVF CVPR, 2022: 4868-4878.
[7] Berton G, Mereu R, Trivigno G, Masone C, Csurka G, Sattler T, Caputo B. Deep Visual Geo-localization Benchmark. IEEE/CVF CVPR, 2022: 5386-5397.
[8] Berton G M, Paolicelli V, Masone C, Caputo B. Adaptive-Attentive Geolocalization from Few Queries: A Hybrid Approach. IEEE WACV, 2021: 2917-2926.
[9] Boittiaux C, Dune C, Ferrera M, Arnaubec A, Marxer R, Matabos M, Van Audenhaege L, Hugel V. Eiffel Tower: A deep-sea underwater dataset for long-term visual localization. Int. J. Robotics Research, 2023, 42(9): 689-699.
[10] Bommasani R, et al. On the Opportunities and Risks of Foundation Models. arXiv:2108.07258, 2021.