RLP-VIO: Robust and lightweight plane-based visual-inertial odometry for augmented reality

Cited by: 1
Authors
Li, Jinyu [1 ]
Zhou, Xin [1 ]
Yang, Bangbang [1 ]
Zhang, Guofeng [1 ]
Wang, Xun [2 ]
Bao, Hujun [1 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD & CG, East Bldg 1A-509, Hangzhou 310058, Peoples R China
[2] Zhejiang Gongshang Univ, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
augmented reality; bundle adjustment; plane prior; SLAM; visual-inertial odometry; MONOCULAR SLAM; KALMAN FILTER; PARAMETRIZATION; VERSATILE;
DOI
10.1002/cav.2046
Chinese Library Classification
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
We propose RLP-VIO, a robust and lightweight monocular visual-inertial odometry system using multi-plane priors. With planes extracted from the point cloud, visual-inertial-plane PnP uses the plane information for fast localization. Depth estimation is susceptible to degenerate motion, so the planes are expanded in a reprojection-consensus-based way that is robust to depth errors. For sensor fusion, our sliding-window optimization uses a novel structureless plane-distance error cost, which prevents the fill-in effect that destroys the sparsity of the bundle adjustment (BA) problem and permits the use of a smaller sliding window while maintaining good accuracy. The total computational cost is further reduced with our modified marginalization strategy. To further improve tracking robustness, the landmark depths are constrained using the planes during degenerate motion. The whole system is parallelized with a three-stage pipeline. Under controlled environments, this parallelization runs deterministically and produces consistent results. The resulting VIO system is tested on widely used datasets and compared with several state-of-the-art systems. Our system achieves competitive accuracy and works robustly even on long and challenging sequences. To demonstrate the effectiveness of the proposed system, we also show an AR application running on mobile devices in real time.
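The abstract's "plane-distance error cost" penalizes landmarks for straying from the plane they are assigned to. As a minimal sketch (not the authors' code; the function name and plane parametrization `n·x + d = 0` are assumptions), the per-point residual is the signed point-to-plane distance:

```python
def plane_distance_residual(point, normal, offset):
    """Signed distance of a 3-D point to the plane n.x + d = 0,
    where `normal` n is a unit vector and `offset` is the scalar d.
    A landmark lying exactly on the plane yields a zero residual."""
    return sum(n_i * p_i for n_i, p_i in zip(normal, point)) + offset

# A point on the plane z = 2 (normal (0, 0, 1), offset -2) has zero residual;
# a point one unit above it has residual 1.
print(plane_distance_residual((1.0, 5.0, 2.0), (0.0, 0.0, 1.0), -2.0))  # 0.0
print(plane_distance_residual((0.0, 0.0, 3.0), (0.0, 0.0, 1.0), -2.0))  # 1.0
```

In a structureless formulation such residuals are expressed without keeping the plane-constrained landmarks as explicit optimization variables, which is what avoids the fill-in the abstract mentions.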
Pages: 22