Leveraging Deep Learning for Visual Odometry Using Optical Flow

Cited by: 17
Authors
Pandey, Tejas [1 ]
Pena, Dexmont [1 ]
Byrne, Jonathan [1 ]
Moloney, David [1 ]
Affiliations
[1] Intel Res & Dev, Leixlip W23 CX68, Ireland
Keywords
visual odometry; ego-motion estimation; deep learning
DOI
10.3390/s21041313
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
In this paper, we study deep learning approaches for monocular visual odometry (VO). Deep learning solutions have been shown to be effective in VO applications, replacing highly engineered steps of the traditional pipeline, such as feature extraction and outlier rejection. We propose a new architecture combining ego-motion estimation and sequence-based learning using deep neural networks. We estimate camera motion from optical flow using Convolutional Neural Networks (CNNs) and model the motion dynamics using Recurrent Neural Networks (RNNs). The network outputs the relative 6-DOF camera poses for a sequence and implicitly learns the absolute scale without the need for camera intrinsics. The entire trajectory is then integrated without any post-calibration. We evaluate the proposed method on the KITTI dataset and compare it with traditional methods and other deep learning approaches from the literature.
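
The CNN-plus-RNN pipeline described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' released code: the module name FlowVO, the layer sizes, and the two-channel optical-flow input are assumptions chosen only to show how per-frame-pair flow encodings from a CNN can be passed through an LSTM to regress relative 6-DOF poses.

import torch
import torch.nn as nn

# Hypothetical sketch (assumed names and sizes), not the paper's implementation.
class FlowVO(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # CNN encoder over a 2-channel optical-flow field for each frame pair.
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # -> (B*T, 128, 1, 1)
        )
        # RNN models motion dynamics across the sequence of flow encodings.
        self.rnn = nn.LSTM(input_size=128, hidden_size=hidden, batch_first=True)
        # Regression head: relative 6-DOF pose (tx, ty, tz, roll, pitch, yaw).
        self.head = nn.Linear(hidden, 6)

    def forward(self, flow_seq):
        # flow_seq: (B, T, 2, H, W) optical flow for T consecutive frame pairs.
        b, t, c, h, w = flow_seq.shape
        feats = self.cnn(flow_seq.reshape(b * t, c, h, w)).flatten(1)  # (B*T, 128)
        seq, _ = self.rnn(feats.reshape(b, t, -1))                     # (B, T, hidden)
        return self.head(seq)                                          # (B, T, 6)

# Usage: relative poses for a 5-step sequence of 64x192 flow fields; composing
# these per-step poses frame by frame yields the integrated trajectory.
rel_poses = FlowVO()(torch.randn(1, 5, 2, 64, 192))
print(rel_poses.shape)  # torch.Size([1, 5, 6])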
Pages: 1-13 (13 pages)