Learning Matchable Image Transformations for Long-Term Metric Visual Localization

Cited by: 10
Authors
Clement, Lee [1 ]
Gridseth, Mona [2 ]
Tomasi, Justin [1 ]
Kelly, Jonathan [1 ]
Affiliations
[1] University of Toronto Institute for Aerospace Studies (UTIAS), Space & Terrestrial Autonomous Robotic Systems (STARS) Lab, Toronto, ON M3H 5T6, Canada
[2] University of Toronto Institute for Aerospace Studies (UTIAS), Autonomous Space Robotics Lab (ASRL), North York, ON M3H 5T6, Canada
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2020, Vol. 5, No. 2
Keywords
Deep learning in robotics and automation; visual learning; visual-based navigation; localization; NAVIGATION; VISION; TEACH;
DOI
10.1109/LRA.2020.2967659
CLC Number
TP24 [Robotics]
Discipline Code
080202; 1405
Abstract
Long-term metric self-localization is an essential capability of autonomous mobile robots, but remains challenging for vision-based systems due to appearance changes caused by lighting, weather, or seasonal variations. While experience-based mapping has proven to be an effective technique for bridging the 'appearance gap,' the number of experiences required for reliable metric localization over days or months can be very large, and methods for reducing the necessary number of experiences are needed for this approach to scale. Taking inspiration from color constancy theory, we learn a nonlinear RGB-to-grayscale mapping that explicitly maximizes the number of inlier feature matches for images captured under different lighting and weather conditions, and use it as a pre-processing step in a conventional single-experience localization pipeline to improve its robustness to appearance change. We train this mapping by approximating the target non-differentiable localization pipeline with a deep neural network, and find that incorporating a learned low-dimensional context feature can further improve cross-appearance feature matching. Using synthetic and real-world datasets, we demonstrate substantial improvements in localization performance across day-night cycles, enabling continuous metric localization over a 30-hour period using a single mapping experience, and allowing experience-based localization to scale to long deployments with dramatically reduced data requirements.
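To make the idea in the abstract concrete, the sketch below shows one possible form such a learned nonlinear RGB-to-grayscale mapping could take: a small per-pixel network in PyTorch whose output feeds a conventional feature-based localization pipeline. This is an illustrative assumption, not the authors' architecture; the class name MatchableGrayscale, the layer sizes, and the use of 1x1 convolutions are hypothetical, and the matchability-maximizing training procedure (learned by approximating the non-differentiable localization pipeline with a deep network) and the low-dimensional context feature are not reproduced here.

# Illustrative sketch only, NOT the implementation from the paper: one possible
# form of a learned nonlinear RGB-to-grayscale mapping, as a per-pixel network.
import torch
import torch.nn as nn

class MatchableGrayscale(nn.Module):  # hypothetical name
    """Learned nonlinear map from an RGB image to a single 'matchable' channel."""
    def __init__(self, hidden_channels: int = 16):
        super().__init__()
        # 1x1 convolutions act as a per-pixel multilayer perceptron over the
        # three colour channels, so the mapping is spatially uniform.
        self.net = nn.Sequential(
            nn.Conv2d(3, hidden_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_channels, hidden_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # keep the output image in [0, 1]
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) in [0, 1] -> (B, 1, H, W) grayscale image that a
        # conventional feature-based localization pipeline would consume.
        return self.net(rgb)

if __name__ == "__main__":
    model = MatchableGrayscale()
    frames = torch.rand(2, 3, 240, 320)   # two synthetic RGB frames
    gray = model(frames)
    print(gray.shape)                      # torch.Size([2, 1, 240, 320])

Per the abstract, such a mapping would be trained so that images of the same place captured under different lighting or weather yield more inlier feature matches after this pre-processing, with the gradient signal supplied by a deep network that approximates the non-differentiable localization pipeline.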
Pages: 1492-1499
Page count: 8