Vehicle Simultaneous Localization and Mapping Algorithm with Lidar-camera Fusion

Cited by: 0
Authors:
Yu Z.-J. [1,2]
Zhang C.-G. [2]
Guo B.-Q. [1,2]
Affiliations:
[1] College of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing
[2] Frontiers Science Center for Smart High-speed Railway System, Beijing Jiaotong University, Beijing
Funding:
National Natural Science Foundation of China
Keywords:
Intelligent transportation; Lidar-camera fusion SLAM; Multi-sensor fusion; Vehicle self-positioning
DOI:
10.16097/j.cnki.1009-6744.2021.04.009
Abstract:
Localization and mapping is the basis of autonomous vehicle driving in unknown environments. Because lidar relies heavily on the geometric features of the scene and visual images are vulnerable to lighting interference, SLAM algorithms that depend only on laser point clouds or only on visual images show limitations in vehicle localization and mapping. This paper proposes a vehicle self-positioning algorithm based on laser-vision fusion SLAM, which improves localization performance by combining the complementary advantages of the two sensors. To fully exploit multi-source features, the laser point cloud is used at the front end of the algorithm to recover depth information for the visual features. The laser and visual features are fed into the pose estimation module in a loosely coupled way to improve the robustness of the algorithm. To handle the large-scale optimization of back-end poses and feature points, two strategies are proposed to reduce the computational cost: a balanced selection strategy based on keyframes and a sliding window, and a classification optimization strategy based on feature points and poses. Experimental results show that the average relative positioning error of the proposed algorithm is 0.11 m and 0.002 rad, and the average resource utilization is 22.18% (CPU) and 21.5% (memory). Compared with the traditional A-LOAM and ORB-SLAM2 algorithms, the proposed algorithm performs well in both accuracy and robustness. Copyright © 2021 by Science Press.
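
As a rough illustration of the front-end fusion step described in the abstract (recovering depth for visual features from the laser point cloud), the sketch below projects lidar points into the camera image and assigns each 2D feature the depth of the nearest projected point. This is a minimal sketch under assumed conventions, not the paper's implementation; the function name, the extrinsic T_cam_lidar, and the pixel search radius are all hypothetical.

```python
import numpy as np

def assign_depth_to_features(features_uv, lidar_xyz, K, T_cam_lidar, radius_px=3.0):
    """Assign lidar depth to 2D visual features by nearest projected point.

    features_uv : (N, 2) pixel coordinates of detected visual features
    lidar_xyz   : (M, 3) lidar points in the lidar frame
    K           : (3, 3) camera intrinsic matrix
    T_cam_lidar : (4, 4) extrinsic transform from lidar frame to camera frame
    """
    feature_depth = np.full(len(features_uv), np.nan)

    # Transform lidar points into the camera frame.
    pts_h = np.hstack([lidar_xyz, np.ones((lidar_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    if pts_cam.shape[0] == 0:
        return feature_depth

    # Project onto the image plane (pinhole model, no distortion).
    proj = (K @ pts_cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    depths = pts_cam[:, 2]

    # For each feature, take the depth of the closest projected lidar point,
    # but only if it falls within a small pixel radius.
    for i, f in enumerate(features_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] <= radius_px ** 2:
            feature_depth[i] = depths[j]
    return feature_depth
```

In a loosely coupled pipeline like the one the abstract describes, features left without a nearby lidar return (NaN depth) could still contribute as pure visual constraints.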
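
The back-end balanced selection strategy based on keyframes and a sliding window could, in spirit, look like the sketch below: frames are promoted to keyframes only after sufficient motion, and a fixed-size window keeps the optimization bounded. The window size and motion thresholds are illustrative assumptions, not values from the paper, and the classification optimization of feature points and poses is not shown.

```python
import numpy as np
from collections import deque

class SlidingWindowKeyframes:
    """Hypothetical keyframe selector over a fixed-size sliding window."""

    def __init__(self, window_size=10, trans_thresh=0.3, rot_thresh=0.05):
        # Only the newest `window_size` keyframes stay in the optimization window.
        self.window = deque(maxlen=window_size)
        self.trans_thresh = trans_thresh  # metres (assumed value)
        self.rot_thresh = rot_thresh      # radians (assumed value)

    def maybe_add(self, pose):
        """Promote `pose` ([x, y, z, yaw]) to a keyframe if it moved enough."""
        if not self.window:
            self.window.append(pose)
            return True
        last = self.window[-1]
        dt = np.linalg.norm(pose[:3] - last[:3])
        dyaw = abs(pose[3] - last[3])  # simplified: ignores angle wrap-around
        if dt > self.trans_thresh or dyaw > self.rot_thresh:
            self.window.append(pose)
            return True
        return False
```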
Pages: 72-81
Number of pages: 9
Related papers (12 in total)
  • [1] HE Z Y, XU N., Automatic train operation algorithm based on adaptive iterative learning control theory, Journal of Transportation Systems Engineering and Information Technology, 20, 2, pp. 69-75, (2020)
  • [2] LI W C, HU Z Z, HU Y Z, Et al., Accurate localization based on GPS and image fusion for intelligent vehicles, Journal of Transportation Systems Engineering and Information Technology, 17, 3, pp. 112-119, (2017)
  • [3] WANG X L, HU Z Z, LI W C, Et al., High accuracy vehicle localization by referring to pavement fingerprint, Journal of Transportation Systems Engineering and Information Technology, 18, 4, pp. 38-45, (2018)
  • [4] ZHANG J, SINGH S., LOAM: Lidar odometry and mapping in real-time, Robotics: Science and Systems, 15, 5, pp. 9-25, (2014)
  • [5] JI X, ZUO L, ZHANG C, Et al., LLOAM: LiDAR odometry and mapping with loop-closure detection based correction, 2019 IEEE International Conference on Mechatronics and Automation (ICMA), (2019)
  • [6] SHAN T, ENGLOT B., LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (2018)
  • [7] MUR-ARTAL R, TARDOS J D., ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras, IEEE Transactions on Robotics, 33, 5, pp. 1255-1262, (2017)
  • [8] ENGEL J, STUCKLER J, CREMERS D., Large-scale direct SLAM with stereo cameras, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (2015)
  • [9] FORSTER C, PIZZOLI M, SCARAMUZZA D., SVO: Fast semi-direct monocular visual odometry, 2014 IEEE International Conference on Robotics and Automation (ICRA), (2014)
  • [10] ZHANG J, SINGH S., Visual-lidar odometry and mapping: Low-drift, robust, and fast, 2015 IEEE International Conference on Robotics and Automation (ICRA), (2015)