Accurate and robust visual SLAM with a novel ray-to-ray line measurement model

Cited by: 8
Authors
Zhang, Chengran [1 ]
Fang, Zheng [1 ,2 ]
Luo, Xingjian [2 ]
Liu, Wei [1 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, 3-11, Wenhua Rd, Shenyang 110819, Liaoning, Peoples R China
[2] Northeastern Univ, Fac Robot Sci & Engn, 195 Chuangxin Rd, Shenyang 110169, Liaoning, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Point and line feature; Simultaneous localization and mapping; Visual inertial odometry; Optimization; STRUCTURE-FROM-MOTION; STRUCTURAL REGULARITY; SEGMENT DETECTOR; CORRESPONDENCES; DESCRIPTOR; VERSATILE; POINTS; CAMERA;
DOI
10.1016/j.imavis.2023.104837
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Line features are regarded as more intuitive and accurate landmarks than point features in visual SLAM because each one spans many pixels. However, uncertain factors such as partial occlusion and noise frequently degrade mapping accuracy and destabilize line-assisted SLAM systems. Structural regularities and prior hypotheses are often used to tackle these issues, whereas few works explore the impact of line feature optimization. In this paper, we attempt to improve the accuracy and robustness of a visual SLAM system through the line feature optimization process. First, a concise ray-to-ray residual model is proposed to replace the prevalent point-to-line model so that line features are used integrally. Second, the information matrix associated with observation uncertainties is computed to normalize the residual model, which better balances the weights of different lines. Third, we add the line model to the ORB-SLAM3 system and design a point-and-line based tracking and optimization method. Finally, quantitative criteria are proposed to objectively evaluate the line feature map. Experiments on both synthetic and real datasets demonstrate the advantages of our algorithm in terms of camera ego-motion estimation and mapping. For camera ego-motion estimation, the proposed ray-to-ray residual model produces more accurate results than state-of-the-art line-assisted SLAM/VIO algorithms. Furthermore, the model runs faster and yields more robust results than the prevalent point-to-line reprojection residual model. For mapping, the proposed quantitative criteria open a new perspective for evaluating line-assisted SLAM systems and provide evidence that the proposed method builds a more accurate line feature map.
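The abstract does not give the exact formulation of the ray-to-ray residual, but the idea of comparing back-projected observation rays with the rays toward the corresponding 3D line endpoints can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the choice of an angular error, and the per-endpoint residual are all assumptions.

```python
import numpy as np

def pixel_to_ray(u, v, K):
    """Back-project a pixel (u, v) to a unit-norm ray in the camera frame,
    using the pinhole intrinsic matrix K."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def ray_to_ray_residual(obs_px, endpoints_cam, K):
    """One plausible ray-to-ray residual (an assumption, not the paper's
    exact model): for each observed line endpoint, the angle between the
    back-projected observation ray and the ray toward the corresponding
    3D endpoint expressed in the camera frame."""
    residuals = []
    for (u, v), P in zip(obs_px, endpoints_cam):
        r_obs = pixel_to_ray(u, v, K)          # ray from the 2D observation
        r_map = P / np.linalg.norm(P)          # ray toward the 3D endpoint
        cos_a = np.clip(r_obs @ r_map, -1.0, 1.0)
        residuals.append(np.arccos(cos_a))     # angular error in radians
    return np.array(residuals)
```

With a consistent camera model, an endpoint lying exactly on its observation ray gives a zero residual; in an optimizer this residual would be whitened by the information matrix mentioned in the abstract.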
Pages: 13
Related papers
41 entries total
[1]   EDLines: A real-time line segment detector with a false detection control [J].
Akinlar, Cuneyt ;
Topal, Cihan .
PATTERN RECOGNITION LETTERS, 2011, 32 (13) :1633-1642
[2]   Structure-from-motion using lines: Representation, triangulation, and bundle adjustment [J].
Bartoli, A ;
Sturm, P .
COMPUTER VISION AND IMAGE UNDERSTANDING, 2005, 100 (03) :416-441
[3]   The EuRoC micro aerial vehicle datasets [J].
Burri, Michael ;
Nikolic, Janosch ;
Gohl, Pascal ;
Schneider, Thomas ;
Rehder, Joern ;
Omari, Sammy ;
Achtelik, Markus W. ;
Siegwart, Roland .
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2016, 35 (10) :1157-1163
[4]   ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM [J].
Campos, Carlos ;
Elvira, Richard ;
Gomez Rodriguez, Juan J. ;
Montiel, Jose M. M. ;
Tardos, Juan D. .
IEEE TRANSACTIONS ON ROBOTICS, 2021, 37 (06) :1874-1890
[5]  
Fu Q., arXiv
[6]   PL-SLAM: A Stereo SLAM System Through the Combination of Points and Line Segments [J].
Gomez-Ojeda, Ruben ;
Moreno, Francisco-Angel ;
Zuniga-Noel, David ;
Scaramuzza, Davide ;
Gonzalez-Jimenez, Javier .
IEEE TRANSACTIONS ON ROBOTICS, 2019, 35 (03) :734-746
[7]  
Grupp M., 2017, EVO
[8]   Park marking-based vehicle self-localization with a fisheye topview system [J].
Houben, Sebastian ;
Neuhausen, Marcel ;
Michael, Matthias ;
Kesten, Robert ;
Mickler, Florian ;
Schuller, Florian .
JOURNAL OF REAL-TIME IMAGE PROCESSING, 2019, 16 (02) :289-304
[9]   TP-LSD: Tri-Points Based Line Segment Detector [J].
Huang, Siyu ;
Qin, Fangbo ;
Xiong, Pengfei ;
Ding, Ning ;
He, Yijia ;
Liu, Xiao .
COMPUTER VISION - ECCV 2020, PT XXVII, 2020, 12372 :770-785
[10]  
Li H, 2019, IEEE INT C INT ROBOT, P6914, DOI 10.1109/IROS40897.2019.8968444