Illumination-Aware Multi-Task GANs for Foreground Segmentation

Cited by: 23
Authors
Sakkos, Dimitrios [1 ]
Ho, Edmond S. L. [1 ]
Shum, Hubert P. H. [1 ]
Affiliations
[1] Northumbria Univ, Dept Comp & Informat Sci, Newcastle Upon Tyne NE1 8ST, Tyne & Wear, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
Background subtraction; multi-task learning; generative adversarial networks; video segmentation; illumination-aware; BACKGROUND SUBTRACTION; LOW-RANK; TRACKING; ENHANCEMENT;
DOI
10.1109/ACCESS.2019.2891943
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline Classification Code
0812;
Abstract
Foreground-background segmentation has been an active research area for years. However, conventional models fail to produce accurate results on videos captured under challenging illumination conditions. In this paper, we present a robust model that accurately extracts the foreground even in exceptionally dark or bright scenes, and under continuously varying illumination within a video sequence. This is accomplished by a triple multi-task generative adversarial network (TMT-GAN) that effectively models the semantic relationship between dark and bright images and performs binary segmentation end-to-end. Our contribution is twofold: first, we show that by jointly optimizing the GAN loss and the segmentation loss, our network learns both tasks simultaneously, and the two tasks mutually benefit each other. Second, fusing features of images with varying illumination into the segmentation branch vastly improves the performance of the network. Comparative evaluations on highly challenging real and synthetic benchmark datasets (ESI and SABS) demonstrate the robustness of TMT-GAN and its superiority over state-of-the-art approaches.
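The joint optimization described in the abstract — a GAN loss combined with a binary segmentation loss — can be sketched as a weighted sum of the two objectives. The function names, the non-saturating generator loss, and the weighting factor `lambda_seg` below are illustrative assumptions for a minimal sketch, not the paper's actual implementation:

```python
import math

def binary_cross_entropy(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy, averaged over the mask."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(pred)

def joint_loss(d_fake, seg_pred, seg_target, lambda_seg=10.0):
    """Combined objective: a non-saturating GAN loss for the generator
    plus a weighted binary segmentation loss (illustrative weighting)."""
    eps = 1e-7
    gan_loss = -math.log(min(max(d_fake, eps), 1.0 - eps))  # -log D(G(x))
    seg_loss = binary_cross_entropy(seg_pred, seg_target)
    return gan_loss + lambda_seg * seg_loss
```

Minimizing such a combined objective is what lets the segmentation branch benefit from the adversarial signal and vice versa; a more confident segmentation map (predictions close to the 0/1 ground truth) yields a strictly lower joint loss for the same discriminator output.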
Pages: 10976-10986 (11 pages)