Visual-LiDAR-Inertial Odometry: A New Visual-Inertial SLAM Method based on an iPhone 12 Pro

Cited by: 2
Authors
Jin, Lingqiu
Ye, Cang
Source
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023
Keywords
NAVIGATION; ROBUST
DOI
10.1109/IROS55552.2023.10341536
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
As today's smartphones integrate various imaging sensors and Inertial Measurement Units (IMUs) and have become computationally powerful, there is growing interest in developing smartphone-based visual-inertial (VI) SLAM methods for robotics and computer vision applications. In this paper, we introduce a new SLAM method, called Visual-LiDAR-Inertial Odometry (VLIO), based on an iPhone 12 Pro. VLIO formulates device pose estimation as an optimization problem that minimizes a cost function based on the residuals of the inertial, visual, and depth measurements. We present the first work that 1) characterizes the iPhone's LiDAR depth measurement and identifies models for the measurement error and standard deviation, and 2) characterizes pose change estimation with LiDAR data. The measurement models are then used to compute the depth-related and visual-feature-related residuals for the cost function. Also, VLIO tracks varying camera intrinsic parameters (CIP) in real time and uses them in computing these residuals. Both approaches result in more accurate residual terms and thus more accurate pose estimation. The CIP tracking method eliminates the need for a sophisticated model-fitting process that includes camera calibration and pairing of the CIPs and IMU measurements with various phone orientations. Experimental results validate the efficacy of VLIO.
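For illustration only (the exact formulation is not given in this record), a cost function of the kind described above is usually a nonlinear least-squares objective over a window of device states $\mathcal{X}$, summing squared Mahalanobis norms of the inertial, visual, and depth residuals; all symbols below are assumed for this sketch:

$$
\min_{\mathcal{X}} \;\; \sum_{k}\big\| r_{\mathcal{I}}(z_{\mathcal{I},k},\mathcal{X}) \big\|^{2}_{\Sigma_{\mathcal{I},k}}
\;+\; \sum_{j}\big\| r_{\mathcal{V}}(z_{\mathcal{V},j},\mathcal{X}) \big\|^{2}_{\Sigma_{\mathcal{V},j}}
\;+\; \sum_{i}\big\| r_{\mathcal{D}}(z_{\mathcal{D},i},\mathcal{X}) \big\|^{2}_{\Sigma_{\mathcal{D},i}}
$$

Here $r_{\mathcal{I}}$, $r_{\mathcal{V}}$, and $r_{\mathcal{D}}$ denote the inertial, visual-feature, and depth residuals. In a setup like the one the abstract describes, the depth covariances $\Sigma_{\mathcal{D},i}$ would be set from the identified LiDAR standard-deviation model, and the visual residuals would be computed with the tracked camera intrinsic parameters.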
Pages: 1511-1516
Page count: 6