DeepSLAM: A Robust Monocular SLAM System With Unsupervised Deep Learning

Cited by: 71
Authors
Li, Ruihao [1 ,2 ]
Wang, Sen [3 ]
Gu, Dongbing [4 ]
Affiliations
[1] Natl Innovat Inst Def Technol, Artificial Intelligence Res Ctr, Beijing 100166, Peoples R China
[2] Tianjin Artificial Intelligence Innovat Ctr, Tianjin 300457, Peoples R China
[3] Heriot Watt Univ, Edinburgh Ctr Robot, Edinburgh EH14 4AS, Midlothian, Scotland
[4] Univ Essex, Sch Comp Sci & Elect Engn, Colchester CO4 3SQ, Essex, England
Funding
UK Engineering and Physical Sciences Research Council; National Natural Science Foundation of China; EU Horizon 2020;
Keywords
Simultaneous localization and mapping; Visualization; Training; Three-dimensional displays; Optimization; Pose estimation; Depth estimation; machine learning; recurrent convolutional neural network (RCNN); simultaneous localization and mapping (SLAM); unsupervised deep learning (DL);
DOI
10.1109/TIE.2020.2982096
Chinese Library Classification
TP [automation technology, computer technology];
Discipline Code
0812;
Abstract
In this article, we propose DeepSLAM, a novel unsupervised deep-learning-based visual simultaneous localization and mapping (SLAM) system. Training is fully unsupervised, requiring only stereo imagery rather than annotated ground-truth poses, while testing takes a monocular image sequence as input; DeepSLAM is therefore a monocular SLAM paradigm. The system consists of several essential components: Mapping-Net, Tracking-Net, Loop-Net, and a graph optimization unit. Specifically, Mapping-Net is an encoder-decoder architecture that describes the 3-D structure of the environment, whereas Tracking-Net is a recurrent convolutional neural network architecture that captures the camera motion. Loop-Net is a pretrained binary classifier that detects loop closures. DeepSLAM simultaneously generates pose estimates, depth maps, and outlier rejection masks. We evaluate its performance on various datasets and find that DeepSLAM achieves good pose estimation accuracy and is robust in some challenging scenes.
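The abstract names three learned components plus a graph optimization unit. The PyTorch sketch below illustrates one plausible wiring of those components; every layer size, module detail, and the input resolution are illustrative assumptions, not the architecture published in the paper.

# Minimal sketch of the three-network layout described in the abstract.
# All layer sizes and the wiring are assumptions for illustration; they do
# not reproduce the paper's actual Mapping-Net/Tracking-Net/Loop-Net designs.
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    # Encoder-decoder: RGB frame (B, 3, H, W) -> dense depth map (B, 1, H, W).
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        return self.decoder(self.encoder(img))

class TrackingNet(nn.Module):
    # Recurrent convolutional net: sequence of stacked image pairs
    # (B, T, 6, H, W) -> relative 6-DoF poses (B, T, 6).
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
        )
        self.rnn = nn.LSTM(32 * 8 * 8, 256, batch_first=True)
        self.pose = nn.Linear(256, 6)  # 3 translation + 3 rotation parameters

    def forward(self, pair_seq, state=None):
        b, t = pair_seq.shape[:2]
        feats = self.cnn(pair_seq.flatten(0, 1)).view(b, t, -1)
        out, state = self.rnn(feats, state)
        return self.pose(out), state

class LoopNet(nn.Module):
    # Binary classifier: stacked image pair (B, 6, H, W) -> loop-closure logit.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 5, stride=4, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, pair):
        return self.net(pair)

# Toy forward pass on a five-frame monocular sequence (resolution assumed).
frames = torch.rand(1, 5, 3, 128, 416)
pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)  # consecutive pairs
depth = MappingNet()(frames[:, 0])      # (1, 1, 128, 416) depth map
poses, _ = TrackingNet()(pairs)         # (1, 4, 6) relative poses
loop_logit = LoopNet()(pairs[:, 0])     # (1, 1) loop-closure score

In the full system, loop closures flagged by Loop-Net and the relative poses from Tracking-Net would feed the graph optimization unit to correct accumulated drift; that back end is omitted from this sketch.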
Pages: 3577 - 3587
Page count: 11
Related Papers
50 records in total
  • [11] UVS: underwater visual SLAM—a robust monocular visual SLAM system for lifelong underwater operations
    Leonardi, Marco
    Stahl, Annette
    Brekke, Edmund Førland
    Ludvigsen, Martin
    AUTONOMOUS ROBOTS, 2023, 47 : 1367 - 1385
  • [12] Using Unsupervised Deep Learning Technique for Monocular Visual Odometry
    Liu, Qiang
    Li, Ruihao
    Hu, Huosheng
    Gu, Dongbing
    IEEE ACCESS, 2019, 7 : 18076 - 18088
  • [13] UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning
    Li, Ruihao
    Wang, Sen
    Long, Zhiqiang
    Gu, Dongbing
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 7286 - 7291
  • [14] EGO-SLAM: A Robust Monocular SLAM for Egocentric Videos
    Patra, Suvam
    Gupta, Kartikeya
    Ahmad, Faran
    Arora, Chetan
    Banerjee, Subhashis
    2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2019, : 31 - 40
  • [15] Combining Deep Learning and RGBD SLAM for Monocular Indoor Autonomous Flight
    Martinez-Carranza, J.
    Rojas-Perez, L. O.
    Cabrera-Ponce, A. A.
    Munguia-Silva, R.
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, MICAI 2018, PT II, 2018, 11289 : 356 - 367
  • [16] Loop closure detection using supervised and unsupervised deep neural networks for monocular SLAM systems
    Memon, Azam Rafique
    Wang, Hesheng
    Hussain, Abid
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2020, 126
  • [17] HFNet-SLAM: An Accurate and Real-Time Monocular SLAM System with Deep Features
    Liu, Liming
    Aitken, Jonathan M.
    SENSORS, 2023, 23 (04)
  • [18] Robust monocular SLAM towards motion disturbance
    Liu, Wei
    Zheng, Nanning
    Yuan, Zejian
    Ren, Pengju
    Wang, Tao
    CHINESE SCIENCE BULLETIN, 2014, 59 (17): 2050 - 2056
  • [20] Robust Large Scale Monocular Visual SLAM
    Bourmaud, Guillaume
    Megret, Remi
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 1638 - 1647