Foreground Segmentation in Depth Imagery Using Depth and Spatial Dynamic Models for Video Surveillance Applications

Cited: 11
Authors
del-Blanco, Carlos R. [1 ]
Mantecon, Tomas [1 ]
Camplani, Massimo [1 ]
Jaureguizar, Fernando [1 ]
Salgado, Luis [1 ,2 ]
Garcia, Narciso [1 ]
Affiliations
[1] Univ Politecn Madrid, ETSI Telecomunicac, Grp Tratamiento Imagenes, E-28040 Madrid, Spain
[2] Univ Autonoma Madrid, Video Proc & Understanding Lab, E-28049 Madrid, Spain
Keywords
depth sensors; foreground segmentation; video surveillance; Bayesian network; BACKGROUND SUBTRACTION; COLOR; IDENTIFICATION;
DOI
10.3390/s140201961
CLC Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions in indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture-of-Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with respect to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms obtains a more accurate foreground/background segmentation than other state-of-the-art approaches.
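The background-subtraction component the abstract refers to can be sketched as a per-pixel statistical model over depth values. The snippet below is a minimal illustration of that idea, simplified from a full mixture of Gaussians to a single Gaussian per pixel; the function name and the parameters `alpha`, `k`, and `var0` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_background_subtraction(frames, alpha=0.05, k=2.5, var0=50.0):
    """Running single-Gaussian-per-pixel background model on depth frames.

    A sketch of the background-subtraction idea only (one Gaussian per
    pixel instead of a mixture); alpha, k and var0 are illustrative
    parameters. A pixel whose depth deviates from the background mean by
    more than k standard deviations is labeled foreground.
    """
    mu = frames[0].astype(float)      # background depth mean per pixel
    var = np.full_like(mu, var0)      # background depth variance per pixel
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        # Foreground test: squared deviation beyond k^2 * variance.
        fg = (f - mu) ** 2 > (k ** 2) * var
        bg = ~fg
        # Update the model only where the pixel looks like background.
        mu[bg] += alpha * (f - mu)[bg]
        var[bg] += alpha * ((f - mu) ** 2 - var)[bg]
        masks.append(fg)
    return masks
```

A full mixture model would additionally keep several weighted Gaussians per pixel so that multi-modal backgrounds (e.g., a flickering depth edge) are represented; the paper's contribution then combines such a model with the Bayesian network's spatial and depth dynamic models.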
Pages: 1961-1987
Number of pages: 27