Real-Time Monocular Depth Estimation using Synthetic Data with Domain Adaptation via Image Style Transfer

Cited by: 109
Authors
Atapour-Abarghouei, Amir [1]
Breckon, Toby P. [1,2]
Affiliations
[1] Univ Durham, Dept Comp Sci, Durham, England
[2] Univ Durham, Dept Engn, Durham, England
Source
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2018
DOI
10.1109/CVPR.2018.00296
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Monocular depth estimation using learning-based approaches has become promising in recent years. However, most monocular depth estimators either rely on large quantities of ground-truth depth data, which is extremely expensive and difficult to obtain, or predict disparity as an intermediary step using a secondary supervisory signal, leading to blurring and other artefacts. Training a depth estimation model on pixel-perfect synthetic data can resolve most of these issues but introduces the problem of domain bias: the inability to apply a model trained on synthetic data to real-world scenarios. Leveraging advances in image style transfer and its connections with domain adaptation (Maximum Mean Discrepancy), we take advantage of style transfer and adversarial training to predict pixel-perfect depth from a single real-world colour image, based on training over a large corpus of synthetic environment data. Experimental results indicate the efficacy of our approach compared to contemporary state-of-the-art techniques.
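The abstract links image style transfer to domain adaptation via Maximum Mean Discrepancy (MMD), a kernel-based measure of the distance between two feature distributions (e.g. features of synthetic versus real images). As a minimal illustrative sketch only, not the paper's implementation, a biased estimator of squared MMD with an RBF kernel can be written as follows; the kernel bandwidth `sigma` and the Gaussian sample data are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix between rows of x and y."""
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    samples x and y (rows are feature vectors). Near zero when the two
    samples come from the same distribution."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

# Illustrative check: samples from the same distribution give a smaller
# MMD than samples from a shifted distribution.
rng = np.random.default_rng(0)
same_a = rng.normal(0.0, 1.0, (200, 4))
same_b = rng.normal(0.0, 1.0, (200, 4))
shifted = rng.normal(3.0, 1.0, (200, 4))
print(mmd2(same_a, same_b), mmd2(same_a, shifted))
```

In a domain-adaptation setting, a small MMD between the two feature distributions indicates the domains have been aligned, which is the role style transfer plays in the abstract's pipeline.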
Pages: 2800–2810
Page count: 11