A Monocular Visual Odometry Method Based on Virtual-Real Hybrid Map in Low-Texture Outdoor Environment

Cited by: 5
Authors
Xie, Xiuchuan [1 ]
Yang, Tao [1 ]
Ning, Yajia [1 ]
Zhang, Fangbing [1 ]
Zhang, Yanning [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Natl Engn Lab Integrated AeroSp Ground Ocean Big, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
visual odometry; simultaneous localization and mapping; low-texture environment; line segments; SLAM;
DOI
10.3390/s21103394
Chinese Library Classification
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
With the extensive application of robots such as unmanned aerial vehicles (UAVs) to the exploration of unknown environments, visual odometry (VO) algorithms play an increasingly important role. These environments are diverse and not always well textured; low-texture scenes with insufficient point features are challenging for mainstream VO. In low-texture man-made environments, however, the structural regularity of the scene usually makes line segments abundant. In this paper, we propose a monocular visual odometry algorithm based on a virtual-real hybrid map. The core idea is to reprocess line segment features to generate virtual intersection matching points, which are then used to build a virtual map. Introducing the virtual map improves the stability of visual odometry in low-texture environments. Specifically, we first combine non-parallel matched line segments to generate virtual intersection matching points; then, from these virtual matches, we triangulate a virtual map, which is combined with the real map built from ordinary point features to form a virtual-real hybrid 3D map. Finally, the continuous camera pose is estimated against this hybrid map. Extensive experimental results demonstrate the robustness and effectiveness of the proposed method in various low-texture scenes.
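To illustrate the construction described in the abstract, the sketch below shows how pairs of matched, non-parallel line segments can be intersected in each image to form virtual point matches, which are then triangulated like ordinary feature matches. This is a minimal Python/OpenCV sketch, not the authors' implementation; the function names, the angle threshold, and the toy camera and segment data are assumptions for illustration only.

```python
# Minimal sketch (assumed implementation) of virtual intersection point
# matching and triangulation from matched line segments.
import numpy as np
import cv2


def line_intersection(l1, l2, angle_thresh_deg=10.0):
    """Intersect the supporting lines of two segments (x1, y1, x2, y2).

    Returns None when the segments are nearly parallel, since the
    intersection would be numerically unstable.
    """
    p1, p2 = np.array(l1[:2], float), np.array(l1[2:], float)
    p3, p4 = np.array(l2[:2], float), np.array(l2[2:], float)
    d1, d2 = p2 - p1, p4 - p3
    cos_angle = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12)
    if cos_angle > np.cos(np.deg2rad(angle_thresh_deg)):
        return None  # nearly parallel pair: skip
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / cross
    return p1 + t * d1


def virtual_point_matches(lines_a, lines_b):
    """Build virtual intersection point matches from matched segments.

    lines_a[i] and lines_b[i] are assumed to be the same physical segment
    observed in image A and image B (segment matching done beforehand).
    """
    pts_a, pts_b = [], []
    for i in range(len(lines_a)):
        for j in range(i + 1, len(lines_a)):
            xa = line_intersection(lines_a[i], lines_a[j])
            xb = line_intersection(lines_b[i], lines_b[j])
            if xa is not None and xb is not None:
                pts_a.append(xa)
                pts_b.append(xb)
    return np.array(pts_a), np.array(pts_b)


if __name__ == "__main__":
    # Toy projection matrices and segments, purely illustrative.
    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
    lines_a = [(100, 100, 300, 110), (120, 90, 130, 300)]
    lines_b = [(90, 100, 290, 112), (110, 92, 122, 298)]
    pa, pb = virtual_point_matches(lines_a, lines_b)
    if len(pa):
        # Triangulate the virtual matches exactly as ordinary point matches,
        # yielding virtual 3D points for the hybrid map.
        X_h = cv2.triangulatePoints(P_a, P_b, pa.T.astype(np.float64), pb.T.astype(np.float64))
        X = (X_h[:3] / X_h[3]).T
        print(X)
```

In this reading of the abstract, the virtual 3D points produced here would simply be appended to the real map built from ordinary point features, so the subsequent pose estimation step needs no special handling for them.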
Pages: 20