Generative adversarial framework for depth filling via Wasserstein metric, cosine transform and domain transfer

Cited by: 21
Authors
Atapour-Abarghouei, Amir [1 ]
Akcay, Samet [1 ]
de La Garanderie, Gregoire Payen [1 ]
Breckon, Toby P. [1 ]
Affiliations
[1] Univ Durham, Dept Comp Sci, Durham, England
Keywords
Depth image; Hole filling; Self-supervised learning; Generative model; Adversarial training; Feature distance; Domain adaptation; CONVOLUTIONAL NEURAL-NETWORKS; SUPERRESOLUTION;
DOI
10.1016/j.patcog.2019.02.010
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, the issue of depth filling is addressed using a self-supervised feature learning model that predicts missing depth pixel values based on the context and structure of the scene. A fully-convolutional generative model is conditioned on the available depth information and full RGB colour information from the scene and trained in an adversarial fashion to complete scene depth. Since ground truth depth is not readily available, synthetic data is instead used, with a separate model developed to predict where holes would appear in a sensed (non-synthetic) depth image based on the contents of the RGB image. The resulting synthetic data with realistic holes is utilized in training the depth filling model, which makes joint use of a reconstruction loss employing the Discrete Cosine Transform for more realistic outputs, an adversarial loss measuring distribution distances via the Wasserstein metric, and a bottleneck feature loss that aids in better contextual feature extraction. Additionally, the model is adversarially adapted to perform well on naturally-obtained data with no available ground truth. Qualitative and quantitative evaluations demonstrate the efficacy of the approach compared to contemporary depth filling techniques. The strength of the feature learning capabilities of the resulting deep network model is also demonstrated by performing the task of monocular depth estimation using our pre-trained depth hole filling model as the initialization for subsequent transfer learning. (C) 2019 Elsevier Ltd. All rights reserved.
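The abstract's composite training objective (a DCT-domain reconstruction term, a Wasserstein adversarial term, and a bottleneck feature distance) can be sketched roughly as below. This is a minimal NumPy illustration, not the paper's implementation: the function names, the separable DCT-II construction, the toy identity critic, and any loss weights are assumptions made for clarity.

```python
import numpy as np

def dct2(img):
    """2-D DCT-II of a square image via separable matrix multiplies.

    Illustrative, unnormalised basis; assumes img is n x n.
    """
    n = img.shape[0]
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n))  # DCT-II basis matrix
    return C @ img @ C.T

def dct_reconstruction_loss(pred, target):
    """L1 distance in the DCT frequency domain (stand-in for the
    frequency-aware reconstruction term described in the abstract)."""
    return np.mean(np.abs(dct2(pred) - dct2(target)))

def wasserstein_g_loss(critic, fake):
    """Generator side of a WGAN objective: push critic scores on
    generated samples up, i.e. minimise their negative mean."""
    return -np.mean(critic(fake))

def bottleneck_feature_loss(f_pred, f_target):
    """L2 distance between bottleneck feature vectors of the two inputs."""
    return np.mean((f_pred - f_target) ** 2)

# A weighted sum of the three terms would form the full objective;
# the weights here are purely hypothetical.
def total_loss(pred, target, critic, f_pred, f_target,
               w_adv=0.01, w_feat=0.1):
    return (dct_reconstruction_loss(pred, target)
            + w_adv * wasserstein_g_loss(critic, pred)
            + w_feat * bottleneck_feature_loss(f_pred, f_target))
```

In practice the critic would be a learned network trained with the opposing Wasserstein objective (and a Lipschitz constraint), and the reconstruction and feature terms would be computed on network activations rather than raw arrays.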
Pages: 232-244
Page count: 13