Moving object detection method with motion regions tracking in background subtraction

Cited: 9
Author
Delibasoglu, Ibrahim [1 ]
Affiliation
[1] Sakarya Univ, Software Engn, TR-54050 Sakarya, Turkiye
Keywords
Background subtraction; Moving object detection; Motion detection; Foreground segmentation; Surveillance
DOI
10.1007/s11760-022-02458-y
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes
0808; 0809
Abstract
Identifying any moving object is essential for wide-area surveillance systems and security applications. In this paper, we present a moving object detection method based on background modeling and subtraction. Background modeling-based methods describe a model with features such as color and texture to represent the background. Background subtraction is challenging due to the complex background types found in natural environments, and many methods suffer from numerous false detections in real applications. In this study, we create a background model from each pixel's age, mean, and variance. Our main contribution is a tracking approach within background subtraction, together with a simple frame difference used to set weights during the subtraction operation. The proposed tracking strategy exploits spatio-temporal features in the foreground mask decision and serves as a verification mechanism for candidate moving object regions. The tracking approach is also applied to the frame difference, and the resulting motion mask supports background model subtraction, especially for slow-moving objects, which would otherwise cause failures in our background model. The main novelty of the paper is a practical solution to the false-detection issue caused by homography errors, without adding heavy computational cost. We measure each module's performance to clearly demonstrate its impact on the proposed method. Experiments are conducted on two publicly available aerial image datasets, PESMOD and VIVID. The proposed method runs in real time and outperforms existing background modeling-based methods, achieving a significant reduction in false positives and stable performance across different kinds of images.
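The background model summarized in the abstract (a per-pixel age, mean, and variance, with a simple frame difference used to support the subtraction result) can be sketched roughly as follows. This is a minimal illustrative sketch: the class name `AgeMeanVarBackground`, the age-weighted update rule, and all thresholds are assumptions for demonstration, not the paper's exact formulation.

```python
import numpy as np

class AgeMeanVarBackground:
    """Per-pixel background model keeping an age, running mean, and variance.
    Illustrative sketch of the kind of model described in the abstract;
    update rule and thresholds are assumed, not taken from the paper."""

    def __init__(self, shape, var_init=400.0, k=2.5):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.full(shape, var_init, dtype=np.float64)
        self.age = np.zeros(shape, dtype=np.float64)
        self.k = k  # threshold on squared deviation in units of variance
        self.initialized = False

    def apply(self, frame, prev_frame=None, motion_thresh=10):
        frame = frame.astype(np.float64)
        if not self.initialized:
            # First frame initializes the model; nothing is foreground yet.
            self.mean[...] = frame
            self.age[...] = 1.0
            self.initialized = True
            return np.zeros(frame.shape, dtype=bool)

        # Foreground test: squared deviation from the mean vs. k^2 * variance.
        diff = frame - self.mean
        fg = diff**2 > (self.k**2) * self.var

        # Age-weighted running update, applied only to background pixels so
        # foreground objects do not contaminate the model.
        alpha = 1.0 / (self.age + 1.0)
        bg = ~fg
        self.mean[bg] += alpha[bg] * diff[bg]
        self.var[bg] = (1.0 - alpha[bg]) * self.var[bg] + alpha[bg] * diff[bg]**2
        self.age[bg] += 1.0

        # Simple frame difference as supporting evidence: keep only candidate
        # foreground pixels that also changed since the previous frame.
        if prev_frame is not None:
            motion = np.abs(frame - prev_frame.astype(np.float64)) > motion_thresh
            fg &= motion
        return fg
```

The age-weighted learning rate (`alpha = 1/(age+1)`) makes young pixels adapt quickly while long-observed pixels become stable; the frame-difference gate is one simple way to suppress false positives at pixels the background test flags but that show no actual inter-frame change.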
Pages: 2415-2423
Number of pages: 9