Visual-inertial localization method in low-light scene based on improved image enhancement

Cited by: 0
Authors
Li L. [1 ]
Zhong A. [1 ]
Liang L. [1 ]
Lyu C. [1 ]
Zuo T. [1 ]
Tian X. [2 ]
Affiliations
[1] School of Automation, Beijing Institute of Technology, Beijing
[2] Beijing Institute of Automatic Control Equipment, Beijing
Source
Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology | 2023, Vol. 31, No. 8
Keywords
image enhancement; localization; low-light scene; visual-inertial odometry;
DOI
10.13695/j.cnki.12-1222/o3.2023.08.006
Abstract
To improve the localization accuracy of visual-inertial navigation systems in low-light scenes, a visual-inertial localization algorithm combined with image enhancement is proposed. The camera response model is determined from the histograms of images taken at different exposures, and its parameters are obtained by curve fitting. The illumination map and exposure matrix of a low-light image are determined by nonlinear optimization, and the image is preprocessed according to the camera response model. The optical flow method is used for feature tracking, and the visual error, inertial measurement unit (IMU) error, and prior error are used as constraints to construct a tightly-coupled optimization model, achieving more accurate pose estimation. Finally, the method is evaluated on real data collected by on-board equipment. The experimental results show that the proposed method effectively improves the localization accuracy of the visual-inertial navigation system in low-light scenes: compared with the method without image enhancement, localization accuracy improves by 25.59%; compared with the method before the improvement, it improves by 6.38%. © 2023 Editorial Department of Journal of Chinese Inertial Technology. All rights reserved.
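The enhancement step described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the brightness transfer function and its parameters `a`, `b` follow the generic camera-response-model formulation of Ying et al. (reference [10] below), and the per-pixel illumination map here is a simple channel maximum standing in for the paper's nonlinearly optimized illumination map and exposure matrix.

```python
import numpy as np

def btf(img, k, a=-0.3293, b=1.1258):
    """Brightness transfer function of an assumed camera response
    model: maps image values in [0, 1] to their appearance at
    exposure ratio k. Parameters a, b are illustrative defaults,
    not the paper's fitted values."""
    beta = np.exp(b * (1.0 - k ** a))
    gamma = k ** a
    return np.clip(beta * img ** gamma, 0.0, 1.0)

def illumination_map(img, eps=1e-3):
    """Coarse per-pixel illumination estimate: maximum over colour
    channels (a stand-in for the optimized illumination map)."""
    return np.maximum(img.max(axis=-1), eps)

def enhance(img):
    """Brighten under-exposed pixels using the exposure ratio
    k = 1 / T implied by the illumination map T."""
    t = illumination_map(img)           # (H, W) illumination
    k = 1.0 / t                         # per-pixel exposure ratio
    out = np.empty_like(img)
    for c in range(img.shape[-1]):      # apply BTF channel-wise
        out[..., c] = btf(img[..., c], k)
    return out

dark = np.full((4, 4, 3), 0.1)          # synthetic low-light image
bright = enhance(dark)
print(bright.mean() > dark.mean())      # enhancement brightens the image
```

The enhanced image would then be fed to the optical-flow feature tracker in place of the raw low-light frame; the choice of illumination estimator is the main lever, since the exposure ratio is derived directly from it.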
Pages: 783-789
Page count: 6
References (15 items)
  • [1] Qin T, Li P, Shen S., VINS-Mono: A robust and versatile monocular visual-inertial state estimator, IEEE Transactions on Robotics, 34, 4, pp. 1004-1020, (2018)
  • [2] Sun K, Mohta K, Pfrommer B, et al., Robust stereo visual inertial odometry for fast autonomous flight, IEEE Robotics and Automation Letters, 3, 2, pp. 965-972, (2018)
  • [3] Stumberg L V, Usenko V, Cremers D., Direct sparse visual-inertial odometry using dynamic marginalization, 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2510-2517, (2018)
  • [4] Zou D, Wu Y, Pei L, et al., StructVIO: Visual-inertial odometry with structural regularity of man-made environments, IEEE Transactions on Robotics, 35, 4, pp. 999-1013, (2019)
  • [5] Campos C, Elvira R, Rodriguez J J G, et al., ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM, IEEE Transactions on Robotics, 37, 6, pp. 1874-1890, (2021)
  • [6] Borges P V K, Vidas S., Practical infrared visual odometry, IEEE Transactions on Intelligent Transportation Systems, 17, 8, pp. 2205-2213, (2016)
  • [7] Hao L, Li H, Zhang Q, et al., LMVI-SLAM: Robust low-light monocular visual-inertial simultaneous localization and mapping, 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 272-277, (2019)
  • [8] Savinykh A, Kurenkov M, Kruzhkov E, et al., DarkSLAM: GAN-assisted visual SLAM for reliable operation in low-light conditions, 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring), pp. 1-6, (2022)
  • [9] Zhang S, Zhi Y, Lu S, et al., Monocular vision SLAM research for parking environment with low light, International Journal of Automotive Technology, 23, pp. 693-703, (2022)
  • [10] Ying Z, Li G, Ren Y, et al., A new low-light image enhancement algorithm using camera response model, 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 3015-3022, (2017)