(LC)²: LiDAR-Camera Loop Constraints for Cross-Modal Place Recognition

Cited by: 5
Authors
Lee, Alex Junho [1 ]
Song, Seungwon [1 ]
Lim, Hyungtae [2 ]
Lee, Woojoo [2 ]
Myung, Hyun [2 ]
Affiliations
[1] Hyundai Motor Co, Robot Lab, Res & Dev Div, Uiwang 16082, South Korea
[2] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon, South Korea
Keywords
Point cloud compression; Laser radar; Location awareness; Visualization; Databases; Image recognition; Robots; Localization; sensor fusion; deep learning methods; representation learning;
DOI
10.1109/LRA.2023.3268848
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202 ; 1405 ;
Abstract
Localization has been a challenging task for autonomous navigation. A loop detection algorithm must overcome environmental changes for the place recognition and re-localization of robots. Therefore, deep learning has been extensively studied for the consistent transformation of measurements into localization descriptors. Street view images are easily accessible; however, images are vulnerable to appearance changes. LiDAR can robustly provide precise structural information; however, constructing a point cloud database is expensive, and point clouds exist only in limited places. Different from previous works that train networks to produce a shared embedding directly between the 2D image and the 3D point cloud, we transform both kinds of data into 2.5D depth images for matching. In this work, we propose a novel cross-matching method, called (LC)², for achieving LiDAR localization without a prior point cloud map. To this end, LiDAR measurements are expressed in the form of range images before matching to reduce the modality discrepancy. Subsequently, the network is trained to extract localization descriptors from disparity and range images. Next, the best matches are employed as a loop factor in a pose graph. Using public datasets that include multiple sessions under significantly different lighting conditions, we demonstrate that LiDAR-based navigation systems can be optimized from image databases and vice versa.
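The abstract's first step, expressing LiDAR measurements as range images, is commonly done via spherical projection of the point cloud. The sketch below illustrates that idea only; the function name, image resolution, and field-of-view limits are illustrative assumptions (roughly typical of a 64-beam spinning LiDAR), not values taken from the paper.

```python
import numpy as np

def pointcloud_to_range_image(points, h=64, w=1024,
                              fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud into an (h, w) range image
    by spherical projection. FOV defaults are illustrative, not from
    the paper."""
    r = np.linalg.norm(points, axis=1)
    valid = r > 1e-6                      # drop degenerate returns
    pts, r = points[valid], r[valid]

    yaw = np.arctan2(pts[:, 1], pts[:, 0])   # azimuth in [-pi, pi]
    pitch = np.arcsin(pts[:, 2] / r)         # elevation

    fov_up = np.deg2rad(fov_up_deg)
    fov = fov_up - np.deg2rad(fov_down_deg)

    # Map angles to pixel coordinates.
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * w),
                0, w - 1).astype(np.int32)          # column
    v = np.clip(np.floor((fov_up - pitch) / fov * h),
                0, h - 1).astype(np.int32)          # row

    img = np.zeros((h, w), dtype=np.float32)
    # Keep the nearest return per pixel: write farthest points first
    # so closer ones overwrite them.
    order = np.argsort(-r)
    img[v[order], u[order]] = r[order]
    return img
```

The resulting 2.5D image shares a pixel grid with camera-derived disparity images, which is what lets a single descriptor network be trained across both modalities.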
Pages: 3589-3596
Number of pages: 8
Cited References
44 records in total
  • [1] Arandjelovic R, 2018, IEEE T PATTERN ANAL, V40, P1437, DOI [10.1109/CVPR.2016.572, 10.1109/TPAMI.2017.2711011]
  • [2] Aggregating Deep Convolutional Features for Image Retrieval
    Babenko, Artem
    Lempitsky, Victor
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 1269 - 1277
  • [3] Cao, Bingyi, 2020, Computer Vision - ECCV 2020. 16th European Conference. Proceedings. Lecture Notes in Computer Science (LNCS 12365), P726, DOI 10.1007/978-3-030-58565-5_43
  • [4] Caselitz T, 2016, 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), P1926, DOI 10.1109/IROS.2016.7759304
  • [5] Cattaneo D, 2020, IEEE INT CONF ROBOT, P4365, DOI [10.1109/ICRA40945.2020.9196859, 10.1109/icra40945.2020.9196859]
  • [6] Cattaneo D, 2019, IEEE INT C INTELL TR, P1283, DOI [10.1109/ITSC.2019.8917470, 10.1109/itsc.2019.8917470]
  • [7] HyperMap: Compressed 3D Map for Monocular Camera Registration
    Chang, Ming-Fang
    Mangelson, Joshua
    Kaess, Michael
    Lucey, Simon
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 11739 - 11745
  • [8] Range Image-based LiDAR Localization for Autonomous Vehicles
    Chen, Xieyuanli
    Vizzo, Ignacio
    Labe, Thomas
    Behley, Jens
    Stachniss, Cyrill
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 5802 - 5808
  • [9] BRM Localization: UAV Localization in GNSS-Denied Environments Based on Matching of Numerical Map and UAV Images
    Choi, Junho
    Myung, Hyun
    [J]. 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 4537 - 4544
  • [10] Cole E., 2022, P IEEECVF C COMPUTER, P14755