I2D-Loc: Camera localization via image to LiDAR depth flow

Cited by: 14
Authors
Chen, Kuangyi [1 ]
Yu, Huai [1 ]
Yang, Wen [1 ]
Yu, Lei [1 ]
Scherer, Sebastian [2 ]
Xia, Gui-Song [3 ]
Affiliations
[1] Wuhan Univ, Sch Elect Informat, Wuhan 430072, Peoples R China
[2] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
[3] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Camera localization; 2D-3D registration; Flow estimation; Depth completion; Neural network; LINE; POSE
DOI
10.1016/j.isprsjprs.2022.10.009
Chinese Library Classification
P9 [Physical Geography]
Discipline Code
0705; 070501
Abstract
Accurate camera localization in existing LiDAR maps is promising, as it can combine the strengths of both LiDAR-based and camera-based methods. However, effective methods that robustly handle the appearance and modality differences in 2D-3D localization are still missing. To overcome these problems, we propose I2D-Loc, a scene-agnostic, end-to-end trainable neural network that estimates the 6-DoF pose of an RGB image in an existing LiDAR map by local optimization on an initial pose. Specifically, we first project the LiDAR map onto the image plane according to a rough initial pose and apply a depth completion algorithm to generate a dense depth image. We further design a confidence map to weight the features extracted from the dense depth, yielding a more reliable depth representation. Then, we use a neural network to estimate the correspondence flow between the depth and RGB images. Finally, we use the BPnP algorithm to estimate the 6-DoF pose, computing the gradients of the pose error to optimize the front-end network parameters. Moreover, by decoupling the camera intrinsics from the end-to-end training process, I2D-Loc generalizes to images with different intrinsic parameters. Experiments on the KITTI, Argoverse, and Lyft5 datasets demonstrate that I2D-Loc achieves centimeter-level localization accuracy. The source code, dataset, trained models, and demo videos are released at https://levenberg.github.io/I2D-Loc/.
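The pipeline's first step, projecting the LiDAR map into the image plane of a rough initial pose, can be illustrated with a minimal NumPy sketch. All names here (project_lidar_to_depth, T_cam_world, etc.) are hypothetical and chosen for illustration; the released code at the project page above is the authoritative implementation, and the later stages (depth completion, confidence weighting, flow estimation, BPnP) are not shown.

```python
import numpy as np

def project_lidar_to_depth(points_world, T_cam_world, K, h, w):
    """Project LiDAR map points into the image plane of a (rough) initial
    camera pose, producing a sparse depth image via z-buffering.

    points_world : (N, 3) LiDAR points in map coordinates
    T_cam_world  : (4, 4) initial camera pose (world -> camera)
    K            : (3, 3) camera intrinsics
    h, w         : image height and width in pixels
    """
    # Transform map points into the camera frame.
    pts_h = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Perspective projection; K stays outside the learned network,
    # which is what lets the method decouple the intrinsics.
    uv = (K @ pts_cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Rasterize to a sparse depth image, keeping the nearest point per pixel.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.full((h, w), np.inf)
    for ui, vi, zi in zip(u[valid], v[valid], z[valid]):
        depth[vi, ui] = min(depth[vi, ui], zi)
    depth[np.isinf(depth)] = 0.0  # zero marks empty pixels awaiting completion
    return depth
```

The resulting sparse depth is what the paper then densifies with depth completion before estimating the depth-to-RGB correspondence flow.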
Pages: 209-221
Number of pages: 13