EPLF-VINS: Real-Time Monocular Visual-Inertial SLAM With Efficient Point-Line Flow Features

Cited by: 33
Authors
Xu, Lei [1 ]
Yin, Hesheng [1 ]
Shi, Tong [1 ]
Jiang, Di [1 ]
Huang, Bo [1 ]
Affiliations
[1] Harbin Inst Technol, Ind Res Inst Robot & Intelligent Equipment, Weihai 264209, Peoples R China
Keywords
Feature extraction; Simultaneous localization and mapping; Feature detection; Real-time systems; Optical flow; Location awareness; Optimization; Simultaneous localization and mapping (SLAM); line segment extraction and matching; monocular visual-inertial SLAM; localization; SEGMENT EXTRACTION; ROBUST; DESCRIPTOR; DETECTOR
DOI
10.1109/LRA.2022.3231983
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
This letter introduces an efficient visual-inertial simultaneous localization and mapping (SLAM) method using point and line features. Point-based SLAM methods currently perform poorly in scenarios such as weak texture and motion blur. Many researchers have noticed the favorable geometric properties of line features in space and have attempted to develop line-based SLAM systems. However, the heavy computational cost of line extraction and descriptor matching makes it difficult to guarantee the real-time performance of the whole SLAM system, and incorrect line detection and matching limit the achievable accuracy. In this letter, we improve the traditional line detection model through short-line fusion, uniform distribution of line features, and adaptive-threshold extraction, obtaining high-quality line features for constructing SLAM constraints. Based on the gray-level invariance assumption and a collinearity constraint, we propose a line optical flow tracking method that significantly speeds up line feature matching. In addition, a measurement model that is independent of line endpoints is presented for estimating line residuals. Experimental results show that our algorithm improves the efficiency of line feature detection and matching as well as localization accuracy.
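The abstract describes tracking line features with optical flow under the gray-level invariance assumption and a collinearity constraint, instead of per-frame descriptor matching. The sketch below illustrates that general idea only; it is not the authors' implementation. It samples points along a detected segment, tracks them with OpenCV's pyramidal Lucas-Kanade flow, and enforces collinearity by refitting a line and rejecting outliers. The function name, sample count, and tolerance are hypothetical choices for illustration.

```python
# Illustrative sketch (not the authors' code) of line optical flow tracking:
# sample points along a detected line segment, track them with pyramidal
# Lucas-Kanade under the gray-level invariance assumption, then enforce a
# collinearity constraint by refitting a line to the tracked points.
import cv2
import numpy as np

def track_line_segment(prev_gray, cur_gray, p0, p1, n_samples=8, inlier_tol=2.0):
    """Track one line segment (endpoints p0, p1) from prev_gray to cur_gray."""
    # 1. Sample points uniformly along the segment in the previous frame.
    ts = np.linspace(0.0, 1.0, n_samples, dtype=np.float32)
    pts_prev = (1.0 - ts)[:, None] * np.float32(p0) + ts[:, None] * np.float32(p1)
    pts_prev = pts_prev.reshape(-1, 1, 2)

    # 2. Pyramidal LK optical flow (gray-level invariance assumption).
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, pts_prev, None,
        winSize=(21, 21), maxLevel=3)
    good = pts_cur[status.ravel() == 1].reshape(-1, 2)
    if len(good) < 3:
        return None  # too few points tracked

    # 3. Collinearity constraint: fit a line to the tracked points and
    #    keep only points close to it.
    vx, vy, x0, y0 = cv2.fitLine(good, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    normal = np.array([-vy, vx])
    dist = np.abs((good - np.array([x0, y0])) @ normal)
    inliers = good[dist < inlier_tol]
    if len(inliers) < 3:
        return None  # tracked points are not collinear -> reject the match

    # 4. Recover the tracked segment by projecting inliers onto the fitted line.
    direction = np.array([vx, vy])
    proj = (inliers - np.array([x0, y0])) @ direction
    q0 = np.array([x0, y0]) + proj.min() * direction
    q1 = np.array([x0, y0]) + proj.max() * direction
    return q0, q1
```

Compared with detecting and matching line descriptors (e.g., LBD) in every frame, tracking sampled points with optical flow is the kind of per-frame cost reduction the letter targets for real-time operation.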
Pages: 752-759
Page count: 8