Monocular Visual-Inertial Odometry with Planar Regularities

Cited by: 10
Authors
Chen, Chuchu [1 ]
Geneva, Patrick [1 ]
Peng, Yuxiang [1 ]
Lee, Woosik [1 ]
Huang, Guoquan [1 ]
Affiliations
[1] University of Delaware, Robot Perception and Navigation Group (RPNG), Newark, DE 19716 USA
Source
2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) | 2023
Keywords
ALGORITHM; POINT; SLAM
DOI
10.1109/ICRA48891.2023.10160620
CLC Classification Number
TP [Automation technology, computer technology]
Discipline Classification Code
0812
Abstract
State-of-the-art monocular visual-inertial odometry (VIO) approaches rely on sparse point features, in part due to their efficiency, robustness, and prevalence, while ignoring high-level structural regularities such as planes, which are common in man-made environments and can be exploited to further constrain motion. Because of their large spatial extent, planes can generally be observed by a camera for significant periods of time and are thus amenable to long-term navigation. In this paper, we therefore design a novel real-time monocular VIO system that is fully regularized by planar features within a lightweight multi-state constraint Kalman filter (MSCKF). At the core of our method is an efficient, robust monocular plane detection algorithm that does not require additional sensing modalities such as a stereo or depth camera, as is common in the literature, while enabling real-time regularization of point features to environmental planes. Specifically, in the proposed MSCKF, long-lived planes are maintained in the state vector, while shorter-lived ones are marginalized after use for efficiency. Planar regularities are applied to both in-state SLAM features and out-of-state MSCKF features, thus fully exploiting the environmental plane information to improve VIO performance. The proposed approach is evaluated with extensive Monte-Carlo simulations and several real-world experiments, including an author-collected AR scenario, and is shown to outperform point-based VIO in structured environments.
Video demonstration: https://youtu.be/bec7LbYaOS8
AR Table Dataset: https://github.com/rpng/ar_table_dataset
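To make the "planar regularity" applied to point features concrete, the sketch below shows a point-on-plane residual and its analytic Jacobians, assuming a closest-point plane parameterization Pi = d * n (unit normal n, distance d from the origin). This parameterization, the function names, and the example values are illustrative assumptions for this note, not the authors' implementation; the paper's actual plane representation and filter update are given in the full text.

```python
import numpy as np

# Illustrative sketch only (assumed closest-point parameterization, not the
# authors' code): a plane is represented by Pi = d * n, with unit normal n
# and distance d > 0 from the global origin. A point p_f on the plane
# satisfies n^T p_f - d = 0; this scalar constraint is the kind of planar
# regularity that can be stacked onto a point feature's measurement model.

def point_on_plane_residual(p_f: np.ndarray, Pi: np.ndarray) -> float:
    """Scalar residual r = n^T p_f - d; zero when p_f lies on the plane."""
    d = np.linalg.norm(Pi)
    n = Pi / d
    return float(n @ p_f - d)

def residual_jacobians(p_f: np.ndarray, Pi: np.ndarray):
    """Analytic Jacobians of r with respect to the point and the plane."""
    d = np.linalg.norm(Pi)
    n = Pi / d
    J_point = n.reshape(1, 3)                      # dr/dp_f = n^T
    dn_dPi = (np.eye(3) - np.outer(n, n)) / d      # d(Pi/||Pi||)/dPi
    dd_dPi = n.reshape(1, 3)                       # d||Pi||/dPi = n^T
    J_plane = p_f.reshape(1, 3) @ dn_dPi - dd_dPi  # dr/dPi
    return J_point, J_plane

if __name__ == "__main__":
    Pi = np.array([0.0, 0.0, 2.0])       # horizontal plane z = 2
    p_on = np.array([1.0, -3.0, 2.0])    # lies on the plane
    p_off = np.array([1.0, -3.0, 2.5])   # 0.5 m off the plane
    print(point_on_plane_residual(p_on, Pi))   # ~0.0
    print(point_on_plane_residual(p_off, Pi))  # ~0.5
```

In an MSCKF-style filter, such a residual would be linearized (via the Jacobians above) and stacked with a feature's visual reprojection measurements, which is how a plane constraint can tighten the estimates of both in-state SLAM features and out-of-state MSCKF features.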
Pages: 6224-6231
Number of pages: 8