A Monocular Visual SLAM Algorithm Based on Point-Line Feature

Cited by: 0
Authors
Wang D. [1 ]
Huang L. [1 ]
Li Y. [1 ]
Affiliations
[1] Department of Electronic Science and Technology, University of Science and Technology of China, Hefei
Source
Jiqiren/Robot, 2019, Vol. 41, No. 3
Keywords
Line feature; Mobile robot; Monocular vision; Semi-direct method; Simultaneous localization and mapping (SLAM);
DOI
10.13973/j.cnki.robot.180368
Abstract
In circumstances where images are blurred owing to fast camera motion, or in low-textured scenes, a SLAM (simultaneous localization and mapping) algorithm based only on point features has difficulty tracking enough effective point features, which lowers accuracy and robustness and may even cause the system to fail. To address this problem, a monocular visual SLAM algorithm based on point and line features and wheel odometer data is designed. Firstly, the data association accuracy is improved by exploiting the complementarity of point features and line features. On this basis, an environmental feature map with geometric information is constructed, and the wheel odometer data is incorporated to provide prior and scale information for the visual localization algorithm. Then, a more accurate visual pose is estimated by minimizing the reprojection errors of points and line segments in the local map. When visual localization fails, the localization system still works normally using the wheel odometer data. Simulation results on various public datasets show that the proposed algorithm outperforms the multi-state constraint Kalman filter (MSCKF) algorithm and large-scale direct monocular SLAM (LSD-SLAM), which demonstrates the correctness and effectiveness of the algorithm. Finally, the algorithm is applied to a self-developed physical robot system. The root mean square error (RMSE) of the monocular visual localization algorithm is about 7 cm, and the processing time is about 90 ms per frame (640×480) on an embedded platform with a 4-core 1.2 GHz processor. © 2019, Science Press. All rights reserved.
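As a rough illustration of the local-map pose estimation step described in the abstract (a generic sketch, not taken from the paper itself), the joint cost over point and line-segment reprojection errors can be written as below; the symbols (camera pose T, projection function \pi, robust kernel \rho, information matrices \Lambda) are assumed notation, not the authors' own:

\min_{T} \sum_{i\in\mathcal{P}} \rho\!\left( \left\| \pi(T X_i) - x_i \right\|^2_{\Lambda_p} \right) + \sum_{j\in\mathcal{L}} \rho\!\left( d\big(\pi(T P_j),\, l_j\big)^2_{\Lambda_l} + d\big(\pi(T Q_j),\, l_j\big)^2_{\Lambda_l} \right)

where X_i is a 3D map point with image observation x_i, P_j and Q_j are the endpoints of a 3D line segment, l_j is the corresponding detected 2D line, and d(·,·) is the point-to-line distance in the image plane.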
Pages: 392-403
Number of pages: 11
References
26 in total
  • [1] Cadena C., Carlone L., Carrillo H., Et al., Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age, IEEE Transactions on Robotics, 32, 6, pp. 1309-1332, (2017)
  • [2] Klein G., Murray D., Parallel tracking and mapping for small AR workspaces, IEEE and ACM International Symposium on Mixed and Augmented Reality, (2008)
  • [3] Mur-Artal R., Montiel J.M.M., Tardos J.D., ORB-SLAM: A versatile and accurate monocular SLAM system, IEEE Transactions on Robotics, 31, 5, pp. 1147-1163, (2015)
  • [4] Engel J., Schops T., Cremers D., LSD-SLAM: Large-scale direct monocular SLAM, European Conference on Computer Vision, pp. 834-849, (2014)
  • [5] Engel J., Koltun V., Cremers D., Direct sparse odometry, IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 3, pp. 611-625, (2018)
  • [6] Forster C., Pizzoli M., Scaramuzza D., SVO: Fast semi-direct monocular visual odometry, IEEE International Conference on Robotics and Automation, pp. 15-22, (2014)
  • [7] Von Gioi R.G., Jakubowicz J., Morel J.M., Et al., LSD: A fast line segment detector with a false detection control, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 4, pp. 722-732, (2010)
  • [8] Li H.F., Hu Z.H., Chen X.W., PLP-SLAM: A visual SLAM method based on point-line-plane feature fusion, Robot, 39, 2, pp. 214-220, (2017)
  • [9] Pumarola A., Vakhitov A., PL-SLAM: Real-time monocular visual SLAM with points and lines, IEEE International Conference on Robotics and Automation, pp. 4503-4508, (2017)
  • [10] Scaramuzza D., Achtelik M.C., Doitsidis L., Et al., Vision-controlled micro flying robots: From system design to autonomous navigation and mapping in GPS-denied environments, IEEE Robotics & Automation Magazine, 21, 3, pp. 26-40, (2014)