Keeping an Eye on Things: Deep Learned Features for Long-Term Visual Localization

Cited by: 20
Authors
Gridseth, Mona [1]
Barfoot, Timothy D. [1]
Affiliation
[1] Univ Toronto, Inst Aerosp Studies, Toronto, ON M3H 5T6, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Localization; deep learning for visual perception; vision-based navigation;
DOI
10.1109/LRA.2021.3136867
Chinese Library Classification
TP24 [Robotics];
Discipline code
080202 ; 1405 ;
Abstract
In this letter, we learn visual features that we use to first build a map and then localize a robot driving autonomously across a full day of lighting change, including in the dark. We train a neural network to predict sparse keypoints with associated descriptors and scores that can be used together with a classical pose estimator for localization. Our training pipeline includes a differentiable pose estimator such that training can be supervised with ground truth poses from data collected earlier, in our case gathered in 2016 and 2017 with multi-experience Visual Teach and Repeat (VT&R). We insert the learned features into the existing VT&R pipeline to perform closed-loop path following in unstructured outdoor environments. We show successful path following across all lighting conditions despite the robot's map being constructed using daylight conditions. Moreover, we explore the generalizability of the features by driving the robot across all lighting conditions in new areas not present in the feature training dataset. In all, we validated our approach with 35.5 km of autonomous path following experiments in challenging conditions.
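The abstract describes pairing learned keypoints and descriptors with a classical pose estimator that is made differentiable, so that ground-truth poses can supervise feature training end to end. A common way to realize such a solver is weighted SVD (Kabsch) alignment of matched 3D points, where learned per-match scores act as weights; the sketch below illustrates that idea in NumPy (an assumption for illustration only; it is not the paper's exact estimator, and a real training pipeline would use an autograd framework so gradients flow through the SVD):

```python
import numpy as np

def weighted_pose_svd(P, Q, w):
    """Estimate the rigid transform (R, t) aligning source points P (N x 3)
    to target points Q (N x 3), weighted by per-correspondence scores w.
    Every step (weighted means, cross-covariance, SVD) is smooth almost
    everywhere, which is what makes this solver usable inside a
    differentiable training pipeline."""
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)   # weighted centroid of source
    q_bar = (w[:, None] * Q).sum(axis=0)   # weighted centroid of target
    X = P - p_bar
    Y = Q - q_bar
    H = (w[:, None] * X).T @ Y             # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Toy check: recover a known rotation and translation from 50 matches.
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R, t = weighted_pose_svd(P, Q, np.ones(50))
assert np.allclose(R, R_true, atol=1e-6)
assert np.allclose(t, t_true, atol=1e-6)
```

In a learned-feature pipeline, down-weighting a match (small w) smoothly reduces its influence on (R, t), so a pose loss against ground truth can teach the network which keypoints and scores are reliable.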
Pages: 1016 - 1023
Page count: 8