BSUV-Net 2.0: Spatio-Temporal Data Augmentations for Video-Agnostic Supervised Background Subtraction

Cited by: 69
Authors
Tezcan, M. Ozan [1 ]
Ishwar, Prakash [1 ]
Konrad, Janusz [1 ]
Affiliations
[1] Boston Univ, Dept Elect & Comp Engn, Boston, MA 02215 USA
Keywords
Background subtraction; foreground detection; scene independent; scene agnostic; deep learning; data augmentation; NETWORK; IMAGE;
DOI
10.1109/ACCESS.2021.3071163
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Background subtraction (BGS) is a fundamental video processing task and a key component of many applications. Deep learning-based supervised algorithms achieve very good performance in BGS; however, most of these algorithms are optimized for either a specific video or a group of videos, and their performance decreases dramatically when applied to unseen videos. Recently, several papers have addressed this problem and proposed video-agnostic supervised BGS algorithms. However, nearly all of the data augmentations used in these algorithms are limited to the spatial domain and do not account for temporal variations that naturally occur in video data. In this work, we introduce spatio-temporal data augmentations and apply them to one of the leading video-agnostic BGS algorithms, BSUV-Net. We also introduce a new cross-validation training and evaluation strategy for the CDNet-2014 dataset that makes it possible to fairly and easily compare the performance of various video-agnostic supervised BGS algorithms. Our new model trained with the proposed data augmentations, named BSUV-Net 2.0, significantly outperforms state-of-the-art algorithms evaluated on unseen videos of CDNet-2014. We also evaluate the cross-dataset generalization capacity of BSUV-Net 2.0 by training it solely on CDNet-2014 videos and evaluating its performance on the LASIESTA dataset. Overall, BSUV-Net 2.0 provides an approximately 5% improvement in the F-score over state-of-the-art methods on unseen videos of the CDNet-2014 and LASIESTA datasets. Furthermore, we develop a real-time variant of our model, which we call Fast BSUV-Net 2.0, whose performance is close to the state of the art.
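The abstract describes spatio-temporal augmentations only at a high level, so the following is a minimal, illustrative sketch of one plausible augmentation of this kind: a crop window that drifts across consecutive frames to mimic a slow camera pan. The function name, parameters, and drift model are assumptions for illustration, not the exact augmentations defined in the paper.

```python
import numpy as np

def simulated_pan_crop(frames, crop_hw=(224, 224), max_shift=8, rng=None):
    """Illustrative spatio-temporal augmentation: a crop window that drifts
    across consecutive frames, mimicking a slow camera pan.

    frames: array of shape (T, H, W, C) holding T consecutive video frames.
    crop_hw: output (height, width) of the crop.
    max_shift: maximum per-frame drift of the crop window, in pixels.
    Returns an array of shape (T, crop_h, crop_w, C).
    """
    rng = np.random.default_rng() if rng is None else rng
    T, H, W, C = frames.shape
    ch, cw = crop_hw

    # Random starting position of the crop window.
    y = rng.integers(0, H - ch + 1)
    x = rng.integers(0, W - cw + 1)
    # Random, constant per-frame drift shared by all frames (the "pan").
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)

    out = np.empty((T, ch, cw, C), dtype=frames.dtype)
    for t in range(T):
        # Keep the drifting window inside the frame boundaries.
        yt = int(np.clip(y + t * dy, 0, H - ch))
        xt = int(np.clip(x + t * dx, 0, W - cw))
        out[t] = frames[t, yt:yt + ch, xt:xt + cw]
    return out

# Example: augment a clip of 8 RGB frames of size 240x320.
clip = np.random.randint(0, 256, size=(8, 240, 320, 3), dtype=np.uint8)
augmented = simulated_pan_crop(clip, crop_hw=(224, 288))
print(augmented.shape)  # (8, 224, 288, 3)
```

In a supervised BGS setting, the same per-frame crop coordinates would also be applied to the ground-truth foreground masks so that inputs and labels stay aligned.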
Pages: 53849-53860
Number of pages: 12