Can Ground Truth Label Propagation from Video Help Semantic Segmentation?

Cited by: 9
Authors
Mustikovela, Siva Karthik [1 ]
Yang, Michael Ying [2 ]
Rother, Carsten [1 ]
Affiliations
[1] Tech Univ Dresden, Dresden, Germany
[2] Univ Twente, Enschede, Netherlands
Source
COMPUTER VISION - ECCV 2016 WORKSHOPS, PT III | 2016 / Vol. 9915
DOI
10.1007/978-3-319-49409-8_66
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
For state-of-the-art semantic segmentation, training convolutional neural networks (CNNs) requires dense pixelwise ground truth (GT) labeling, which is expensive and involves extensive human effort. In this work, we study whether auxiliary ground truth, so-called pseudo ground truth (PGT), can improve performance. The PGT is obtained by propagating the labels of a GT frame to its subsequent frames in the video using a simple CRF-based cue integration framework. Our main contribution is to demonstrate that noisy PGT, used along with GT, can improve the performance of a CNN. We perform a systematic analysis to determine what kind of PGT should be added to the GT when training a CNN. In this regard, we explore three aspects of PGT that influence learning: (i) the PGT labeling has to be of good quality; (ii) the PGT images have to be different from the GT images; (iii) the PGT has to be trusted differently from the GT. We conclude that PGT that is diverse from the GT images and has good labeling quality can indeed help improve the performance of a CNN. Also, when the PGT is several times larger than the GT, down-weighting the trust in the PGT helps improve accuracy. Finally, we show that using PGT along with GT increases the performance of the Fully Convolutional Network (FCN) on the CamVid data by 2.7% in IoU accuracy. We believe such an approach can be used to train CNNs for semantic video segmentation, where sequentially labeled image frames are needed. To this end, we provide recommendations for using PGT strategically for semantic segmentation, bypassing the need for extensive human labeling effort.
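To illustrate the third aspect (trusting PGT differently from GT), the sketch below shows one way to down-weight the training loss on PGT images relative to GT images. It is a minimal PyTorch-style example under assumed names (mixed_gt_pgt_loss, pgt_weight, is_pgt); it is not the authors' implementation, only a plausible realization of per-image trust weighting.

```python
# Minimal sketch (assumption, not the authors' code): down-weight the
# per-image loss of pseudo ground truth (PGT) relative to ground truth (GT)
# when training a segmentation CNN such as FCN.
import torch
import torch.nn.functional as F

def mixed_gt_pgt_loss(logits, labels, is_pgt, pgt_weight=0.5, ignore_index=255):
    """Weighted pixelwise cross-entropy over a mixed GT/PGT batch.

    logits:     (N, C, H, W) class scores from the network
    labels:     (N, H, W)    GT or propagated PGT label maps (long)
    is_pgt:     (N,) bool    True for images whose labels are PGT
    pgt_weight: trust placed on PGT images (1.0 would treat PGT like GT)
    """
    # Per-pixel cross-entropy without reduction, so each image can be weighted.
    pixel_loss = F.cross_entropy(
        logits, labels, ignore_index=ignore_index, reduction="none"
    )  # shape (N, H, W)

    # GT images keep full weight; PGT images are trusted less.
    weights = torch.ones(is_pgt.shape[0], dtype=pixel_loss.dtype, device=pixel_loss.device)
    weights[is_pgt] = pgt_weight

    return (pixel_loss * weights.view(-1, 1, 1)).mean()
```

In this setting, a training batch mixing GT and PGT frames would mark the propagated frames via is_pgt; the paper's conclusion suggests a weight below 1 matters mainly once the PGT pool is several times larger than the GT pool.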
Pages: 804-820
Page count: 17