Keeping an Eye on Things: Deep Learned Features for Long-Term Visual Localization

Cited by: 20
Authors
Gridseth, Mona [1]
Barfoot, Timothy D. [1]
Affiliations
[1] Univ Toronto, Inst Aerosp Studies, Toronto, ON M3H 5T6, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Localization; deep learning for visual perception; vision-based navigation
DOI
10.1109/LRA.2021.3136867
CLC (Chinese Library Classification) number
TP24 [Robotics]
Subject classification numbers
080202; 1405
Abstract
In this letter, we learn visual features that we use to first build a map and then localize a robot driving autonomously across a full day of lighting change, including in the dark. We train a neural network to predict sparse keypoints with associated descriptors and scores that can be used together with a classical pose estimator for localization. Our training pipeline includes a differentiable pose estimator such that training can be supervised with ground truth poses from data collected earlier, in our case from 2016 and 2017 gathered with multi-experience Visual Teach and Repeat (VT&R). We insert the learned features into the existing VT&R pipeline to perform closed-loop path following in unstructured outdoor environments. We show successful path following across all lighting conditions despite the robot's map being constructed using daylight conditions. Moreover, we explore generalizability of the features by driving the robot across all lighting conditions in new areas not present in the feature training dataset. In all, we validated our approach with 35.5 km of autonomous path following experiments in challenging conditions.
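The abstract describes pairing learned keypoints (with descriptors and scores) with a classical pose estimator that is differentiable, so ground-truth poses can supervise feature learning end to end. A minimal illustrative sketch of one common such solver, weighted rigid alignment of matched 3D keypoints via SVD (the Kabsch/Arun method), is below; the function name and use of NumPy are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def weighted_pose_svd(pts_src, pts_tgt, weights):
    """Weighted rigid alignment (Kabsch/Arun) of matched 3D points.

    Finds R, t minimizing sum_i w_i * ||R @ pts_src[i] + t - pts_tgt[i]||^2.
    Every step (weighted means, matrix products, SVD) is differentiable
    almost everywhere, which is what allows pose error to supervise
    learned keypoint scores in pipelines like the one described above.
    """
    w = weights / weights.sum()
    # Weighted centroids of the two point sets
    c_src = (w[:, None] * pts_src).sum(axis=0)
    c_tgt = (w[:, None] * pts_tgt).sum(axis=0)
    # Weighted cross-covariance of the centred points
    X = pts_src - c_src
    Y = pts_tgt - c_tgt
    H = X.T @ (w[:, None] * Y)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps det(R) = +1 (a proper rotation)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_tgt - R @ c_src
    return R, t
```

In practice the per-point weights would come from the network's learned keypoint scores, so that unreliable matches contribute less to the estimated pose.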
Pages: 1016–1023
Page count: 8