A SLAM System Based on RGBD Image and Point-Line Feature

Cited by: 14
Authors
Li, Dan [1 ]
Liu, Shuang [1 ]
Xiang, Weilai [1 ]
Tan, Qiwei [1 ]
Yuan, Kaicheng [1 ]
Zhang, Zhen [1 ]
Hu, Yingsong [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SLAM; point-line fusion; Plucker coordinates; tracking; back-end optimization; LOCALIZATION;
DOI
10.1109/ACCESS.2021.3049467
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Most existing visual SLAM systems rely solely on point or line features to estimate the camera trajectory. In scenes with missing texture or motion blur, it is difficult to find a sufficient number of reliable features, resulting in low positioning accuracy. To extract more features, a system named RPL-SLAM is proposed that extracts point features and line features separately. Furthermore, the depth information of the RGBD image is used to recover the 3D positions of point and line features, improving the accuracy of camera trajectory estimation. RPL-SLAM consists of three modules: tracking, local mapping, and loop detection. The tracking module extends the use of line features on top of point-feature extraction and matching: an SLD line-segment extraction algorithm that eliminates tiny segments and a DBM segment-matching algorithm based on a bag-of-words model are proposed. These two algorithms improve matching efficiency while preserving matching accuracy, and effectively track and localize the camera in each frame. In the local mapping and loop detection modules, Plücker coordinates are used to represent spatial lines and to define the line reprojection error, so that the back-end optimization error model fusing points and lines is unified, resolving the instability problem in optimization. RPL-SLAM is evaluated on the TUM RGBD and ICL-NUIM datasets and compared with ORB-SLAM2. The results show that, by fusing point-line features with depth images, RPL-SLAM effectively improves the accuracy of pose estimation and map reconstruction while maintaining real-time performance.
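The Plücker parameterization mentioned in the abstract can be illustrated with a minimal sketch (this is a generic construction of Plücker line coordinates from two 3D points, not the paper's implementation; the function name is hypothetical):

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates (d, m) of the 3D line through points p1 and p2:
    d = p2 - p1 is the direction vector, m = p1 x p2 is the moment vector.
    Any valid Plücker line satisfies the constraint d . m = 0."""
    d = p2 - p1
    m = np.cross(p1, p2)
    return d, m

# Two sample points defining a spatial line segment
p1 = np.array([1.0, 0.0, 2.0])
p2 = np.array([3.0, 1.0, 2.0])
d, m = plucker_from_points(p1, p2)

# The Plücker constraint must hold for a well-formed line
assert abs(np.dot(d, m)) < 1e-9
```

Representing lines this way gives a 6-vector with one internal constraint, which is why back-end optimizers typically pair it with a minimal (4-DoF) update; the paper uses this representation to define a line reprojection error alongside the point reprojection error.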
Pages: 9012-9025 (14 pages)