MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System

Cited by: 64
Authors
Eckenhoff, Kevin [1 ]
Geneva, Patrick [1 ]
Huang, Guoquan [1 ]
Affiliations
[1] Univ Delaware, Robot Percept & Nav Grp, Newark, DE 19716 USA
Keywords
Sensors; Cameras; Calibration; Sensor systems; Sensor fusion; Estimation; Visualization; Estimation consistency; estimation resilience; Kalman filtering; multisensor fusion; sensor calibration; state estimation; visual-inertial systems; KALMAN FILTER; OBSERVABILITY ANALYSIS; CALIBRATION; CONSISTENCY; PREINTEGRATION; ENVIRONMENTS; IMPROVEMENT; MOTION; SLAM; EKF;
DOI
10.1109/TRO.2021.3049445
CLC Classification Number
TP24 [Robotics];
Discipline Classification Codes
080202; 1405;
Abstract
As cameras and inertial sensors become ubiquitous in mobile devices and robots, there is great potential in designing visual-inertial navigation systems (VINS) for efficient, versatile 3-D motion tracking that utilize any (multiple) available cameras and inertial measurement units (IMUs) and are resilient to sensor failures or measurement depletion. To this end, rather than following the standard VINS paradigm of a minimal sensing suite of a single camera and IMU, in this article, we design a real-time consistent multi-IMU multi-camera (MIMC) VINS estimator that is able to seamlessly fuse multimodal information from an arbitrary number of uncalibrated cameras and IMUs. Within an efficient multi-state constraint Kalman filter framework, the proposed MIMC-VINS algorithm optimally fuses asynchronous measurements from all sensors while providing smooth, uninterrupted, and accurate 3-D motion tracking even if some sensors fail. The key idea of the proposed MIMC-VINS is to perform high-order on-manifold state interpolation to efficiently process all available visual measurements without the computational burden of estimating additional sensor poses at asynchronous imaging times. To fuse the information from multiple IMUs, we propagate a joint system consisting of all IMU states while enforcing rigid-body constraints between the IMUs during the filter update stage. Finally, we estimate both spatiotemporal extrinsic and visual intrinsic parameters online to make our system robust to errors in prior sensor calibration. The proposed system is extensively validated in both Monte Carlo simulations and real-world experiments.
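The on-manifold interpolation idea in the abstract can be illustrated with a simplified first-order sketch: a camera pose at an asynchronous imaging time is expressed as a geodesic interpolation between the two nearest estimated states on SO(3) x R^3, rather than as an extra state in the filter. Note the paper itself uses high-order interpolation; the function names below are illustrative assumptions, and only the first-order (linear) case is shown.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == cross(a, b)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def so3_exp(phi):
    """Exponential map: rotation vector -> rotation matrix (Rodrigues)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3) + skew(phi)
    a = phi / theta
    A = skew(a)
    return np.eye(3) + np.sin(theta) * A + (1 - np.cos(theta)) * (A @ A)

def so3_log(R):
    """Logarithm map: rotation matrix -> rotation vector."""
    cos_theta = np.clip((np.trace(R) - 1) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * w

def interpolate_pose(t, t0, R0, p0, t1, R1, p1):
    """First-order on-manifold pose interpolation at time t in [t0, t1]:
    geodesic on SO(3) for orientation, linear for position."""
    lam = (t - t0) / (t1 - t0)
    dR = so3_log(R0.T @ R1)          # relative rotation in the tangent space
    R_t = R0 @ so3_exp(lam * dR)     # slerp-style rotation interpolation
    p_t = (1 - lam) * p0 + lam * p1  # linear position interpolation
    return R_t, p_t
```

Because the interpolated pose is a deterministic function of the two bounding states, the visual measurement Jacobians chain back onto states already in the filter window, which is what avoids adding a new state per asynchronous image.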
Pages: 1360-1380
Page count: 21