Online supervised attention-based recurrent depth estimation from monocular video

Cited by: 0
Authors
Maslov D. [1 ]
Makarov I. [1 ,2 ]
Affiliations
[1] School of Data Analysis and Artificial Intelligence, HSE University, Moscow
[2] Samsung-PDMI Joint AI Center, St. Petersburg Department of Steklov Institute of Mathematics, St. Petersburg
Source
Maslov, Dmitrii (dvmaslov@edu.hse.ru) | PeerJ Computer Science (PeerJ Inc.), Vol. 6, 2020
Keywords
Augmented Reality; Autonomous Vehicles; Computer Science Methods; Computer Vision; Deep Convolutional Neural Networks; Depth Reconstruction; Recurrent Neural Networks
DOI
10.7717/PEERJ-CS.317
Abstract
Autonomous driving depends heavily on depth information for safe operation. Recently, major improvements have been made to both supervised and self-supervised methods for depth reconstruction. However, most current approaches focus on single-frame depth estimation, where the quality ceiling is hard to surpass due to general limitations of supervised learning with deep neural networks. One way to improve the quality of existing methods is to exploit temporal information from frame sequences. In this paper, we study ways of integrating a recurrent block into a common supervised depth estimation pipeline. We propose a novel method that takes advantage of the convolutional gated recurrent unit (convGRU) and convolutional long short-term memory (convLSTM). We compare the use of convGRU and convLSTM blocks and determine the best model for the real-time depth estimation task. We carefully study the training strategy and provide new deep neural network architectures for depth estimation from monocular video that use information from past frames via an attention mechanism. We demonstrate the efficiency of exploiting temporal information by comparing our best recurrent method with existing image-based and video-based solutions for monocular depth reconstruction. © 2020. Maslov and Makarov. All Rights Reserved.
Pages: 1-22
Page count: 21