A visual SLAM method assisted by IMU and deep learning in indoor dynamic blurred scenes

Times Cited: 6
Authors
Liu, Fengyu [1 ]
Cao, Yi [1 ]
Cheng, Xianghong [1 ]
Liu, Luhui [1 ]
Affiliation
[1] Southeast University, School of Instrument Science and Technology, Nanjing 210096, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
dynamic blurred scenes; visual SLAM; image blur; IMU assistance; deep learning; feature point velocity; robust
DOI
10.1088/1361-6501/ad03b9
Chinese Library Classification (CLC)
T [Industrial Technology]
Subject Classification Code
08
Abstract
Dynamic targets in the environment can seriously degrade the accuracy of simultaneous localization and mapping (SLAM) systems. This article proposes a novel dynamic visual SLAM method that combines an inertial measurement unit (IMU) and deep learning for indoor dynamic blurred scenes. The method improves the front end of ORB-SLAM2 by combining deep learning with geometric constraints, making the elimination of dynamic feature points more reasonable and robust. First, a multi-directional superposition blur augmentation algorithm is added to the YOLOv5s network to compensate for errors caused by fast-moving targets, camera shake and camera defocus. Then, the fine-tuned YOLOv5s model is used to detect potential dynamic regions. Afterward, IMU measurements are introduced for rotation compensation to calculate the feature point velocity and to estimate the motion speed of the camera, so that the real motion state of potential dynamic targets can be inferred. Finally, real dynamic points are removed, while potential dynamic points are retained for subsequent pose estimation. Experiments are conducted on the Technische Universität München (TUM) dynamic dataset and in the real world. The results demonstrate that the proposed method achieves a significant improvement over ORB-SLAM2 and performs more robustly than several other state-of-the-art dynamic visual SLAM systems.
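As a concrete illustration of the rotation-compensation step described in the abstract, the following is a minimal Python sketch (not the authors' code) of how an IMU-derived rotation can be removed from a feature's frame-to-frame displacement before its residual velocity is compared against the camera's own motion speed. The intrinsic matrix K, the gyro-integrated rotation R_c1_c0, the frame interval dt, the camera speed cam_speed_px and the ratio threshold are all illustrative assumptions, not values or interfaces taken from the paper.

import numpy as np

def pixel_velocity_after_rotation(p0, p1, K, R_c1_c0, dt):
    """Residual pixel velocity of a feature after removing the motion that
    pure camera rotation alone would induce (rotation-only, far-point model).
    p0, p1: pixel coordinates of the same feature in consecutive frames."""
    K_inv = np.linalg.inv(K)
    # Back-project the previous keypoint to a bearing ray, rotate it into the
    # current frame with the IMU-integrated rotation, and re-project: this is
    # where a static point would land if the camera had only rotated.
    ray0 = K_inv @ np.array([p0[0], p0[1], 1.0])
    ray1 = R_c1_c0 @ ray0
    p0_pred = (K @ (ray1 / ray1[2]))[:2]
    # The remaining displacement is attributed to camera translation and/or
    # independent object motion; divide by dt to obtain a velocity in px/s.
    return np.linalg.norm(np.asarray(p1, dtype=float) - p0_pred) / dt

def is_dynamic(p0, p1, K, R_c1_c0, dt, cam_speed_px, ratio=2.0):
    """Label a feature inside a detected 'potential dynamic' region as truly
    dynamic when its rotation-compensated velocity clearly exceeds the speed
    explained by the camera's own motion (ratio is a hand-picked guess)."""
    v = pixel_velocity_after_rotation(p0, p1, K, R_c1_c0, dt)
    return v > ratio * cam_speed_px

Features flagged by this kind of test would be discarded before pose estimation, while the remaining points inside the detected regions would be kept, mirroring the "remove real dynamic points, reserve potential dynamic points" strategy described above.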
Pages: 14