Online supervised attention-based recurrent depth estimation from monocular video

Cited by: 0
Authors
Maslov D. [1]
Makarov I. [1,2]
Affiliations
[1] School of Data Analysis and Artificial Intelligence, HSE University, Moscow
[2] Samsung-PDMI Joint AI Center, St. Petersburg Department of Steklov Institute of Mathematics, St. Petersburg
Keywords
Augmented Reality; Autonomous Vehicles; Computer Science Methods; Computer Vision; Deep Convolutional Neural Networks; Depth Reconstruction; Recurrent Neural Networks;
DOI
10.7717/PEERJ-CS.317
Abstract
Autonomous driving depends heavily on depth information for safe operation. Recently, major strides have been made in improving both supervised and self-supervised methods for depth reconstruction. However, most current approaches focus on single-frame depth estimation, whose quality ceiling is hard to surpass due to the general limitations of supervised learning with deep neural networks. One way to improve the quality of existing methods is to exploit temporal information from frame sequences. In this paper, we study intelligent ways of integrating a recurrent block into a common supervised depth estimation pipeline. We propose a novel method that takes advantage of the convolutional gated recurrent unit (convGRU) and convolutional long short-term memory (convLSTM). We compare the use of convGRU and convLSTM blocks and determine the best model for the real-time depth estimation task. We carefully study the training strategy and provide new deep neural network architectures for depth estimation from monocular video that use information from past frames via an attention mechanism. We demonstrate the efficiency of exploiting temporal information by comparing our best recurrent method with existing image-based and video-based solutions for monocular depth reconstruction. © 2020. Maslov and Makarov. All Rights Reserved.
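The convGRU block the abstract refers to can be sketched with the standard convolutional GRU update equations: the usual GRU gates, but computed with 2D convolutions so the hidden state stays a spatial feature map. The following is a minimal NumPy illustration, not the authors' implementation; the channel counts, 3x3 kernel size, and random initialization are illustrative assumptions, and biases are omitted.

```python
import numpy as np

def conv2d_same(x, w):
    """2D convolution with zero padding. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for i in range(H):
        for j in range(W):
            # Contract each (C_in, k, k) patch against all output filters.
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k],
                                        axes=([1, 2, 3], [0, 1, 2]))
    return out

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class ConvGRUCell:
    """Standard convGRU cell: GRU gates computed with 2D convolutions."""
    def __init__(self, c_in, c_hid, k=3, seed=0):
        rng = np.random.default_rng(seed)
        shape = (c_hid, c_in + c_hid, k, k)
        self.wz = rng.normal(0.0, 0.1, shape)  # update-gate weights
        self.wr = rng.normal(0.0, 0.1, shape)  # reset-gate weights
        self.wh = rng.normal(0.0, 0.1, shape)  # candidate-state weights

    def step(self, x, h):
        xh = np.concatenate([x, h], axis=0)
        z = sigmoid(conv2d_same(xh, self.wz))  # update gate
        r = sigmoid(conv2d_same(xh, self.wr))  # reset gate
        h_tilde = np.tanh(conv2d_same(np.concatenate([x, r * h], axis=0), self.wh))
        return (1.0 - z) * h + z * h_tilde     # blend old state with candidate
```

Fed per-frame encoder features, the hidden state carries information across the video sequence; the attention weighting over past states that the paper describes, as well as normalization layers, are left out of this sketch for brevity.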
Pages: 1-22 (21 pages)