CloudVision: DNN-based Visual Localization of Autonomous Robots using Prebuilt LiDAR Point Cloud

Cited by: 2
Authors
Yudin, Evgeny [1 ]
Karpyshev, Pavel [1 ]
Kurenkov, Mikhail [1 ]
Savinykh, Alena [1 ]
Potapov, Andrei [1 ]
Kruzhkov, Evgeny [1 ]
Tsetserukou, Dzmitry [1 ]
Affiliations
[1] Skolkovo Inst Sci & Technol, ISR Lab, Moscow, Russia
Source
2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-SPRING | 2023
Keywords
Autonomous robot; Visual localization; Mapping; Deep Learning; Sensors Fusion; LiDAR map;
DOI
10.1109/VTC2023-Spring57618.2023.10199461
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Code
0812
Abstract
In this study, we propose a novel visual localization approach that accurately estimates the six-degree-of-freedom (6DoF) pose of a robot within a 3D LiDAR map using visual data from an RGB camera. The 3D map is built with an advanced LiDAR-based simultaneous localization and mapping (SLAM) algorithm capable of producing a precise sparse map. Features extracted from the camera images are matched against points of the 3D map, and a geometric optimization problem is then solved to achieve precise visual localization. Our approach allows a scout robot equipped with an expensive LiDAR to be deployed only once, for mapping the environment, while multiple operational robots carrying only RGB cameras perform mission tasks with localization accuracy higher than that of common camera-based solutions. The proposed method was evaluated on a custom dataset collected at the Skolkovo Institute of Science and Technology (Skoltech). In our assessment of localization accuracy, we achieved centimeter-level precision: the median translation error was as low as 1.3 cm. The precise positioning achieved with cameras alone enables autonomous mobile robots to solve the most complex tasks that require high localization accuracy.
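The geometric optimization described in the abstract, recovering a 6DoF camera pose from correspondences between 2D image features and 3D map points, is an instance of the classic Perspective-n-Point (PnP) problem. The sketch below is illustrative only and not the authors' implementation: a minimal Direct Linear Transform (DLT) pose solver in NumPy, checked on synthetic noise-free correspondences. Production systems typically refine such a linear estimate by minimizing reprojection error and wrap the solver in RANSAC to reject feature mismatches.

```python
import numpy as np

def estimate_pose_dlt(pts3d, pts2d, K):
    """Recover camera pose [R | t] from N >= 6 3D-2D correspondences via DLT.

    pts3d: (N, 3) map points in world coordinates
    pts2d: (N, 2) matched pixel coordinates
    K:     (3, 3) camera intrinsics
    """
    # Normalize pixels to camera-ray coordinates with K^-1.
    pts_n = (np.linalg.inv(K) @ np.c_[pts2d, np.ones(len(pts2d))].T).T
    # Each correspondence contributes two linear constraints on the
    # 12 entries of the 3x4 pose matrix P = [R | t].
    A = []
    for (X, Y, Z), (x, y, _) in zip(pts3d, pts_n):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    # The null vector of A (last right-singular vector) holds P up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    P /= np.linalg.norm(P[2, :3])           # rows of a rotation have unit norm
    if P[2] @ np.append(pts3d[0], 1) < 0:   # enforce positive depth (sign fix)
        P = -P
    U, _, Vh = np.linalg.svd(P[:, :3])      # project the 3x3 block onto SO(3)
    return U @ Vh, P[:, 3]

# Synthetic check: project known map points with a ground-truth pose,
# then recover that pose from the projections alone.
theta = 0.1
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0,             1, 0            ],
                   [-np.sin(theta), 0, np.cos(theta)]])
t_true = np.array([0.1, -0.2, 0.3])
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
pts3d = np.array([[0, 0, 4], [1, 0, 5], [0, 1, 6], [1, 1, 4],
                  [-1, 0.5, 5], [0.5, -1, 6], [2, 1, 5], [-1, -1, 4.5]],
                 dtype=float)
cam = pts3d @ R_true.T + t_true             # world -> camera frame
proj = cam @ K.T
pts2d = proj[:, :2] / proj[:, 2:]           # perspective division to pixels
R_est, t_est = estimate_pose_dlt(pts3d, pts2d, K)
```

With noise-free correspondences the linear solution is already exact up to floating-point precision; with real detections (e.g. learned keypoints such as SuperPoint, cited below in the paper's references), the DLT output would only serve as an initial guess for nonlinear refinement.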
Pages: 6