Convolutional Recurrent Predictor: Implicit Representation for Multi-Target Filtering and Tracking

Cited by: 9
Authors
Emambakhsh, Mehryar [1 ,2 ]
Bay, Alessandro [1 ]
Vazquez, Eduard [3 ]
Affiliations
[1] Cortexica Vis Syst Ltd, London, England
[2] Mesirow Financial, London, England
[3] AnyVision, Belfast, Antrim, Northern Ireland
Keywords
Multi-target filtering and tracking; random finite sets; convolutional recurrent neural networks; long short-term memory; spatio-temporal data
DOI
10.1109/TSP.2019.2931170
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline codes
0808; 0809
Abstract
Defining a multi-target motion model, an important step in tracking algorithms, is a challenging task, from its theoretical formulation to its computational complexity. Fixed models (as used in several generative Bayesian algorithms, such as Kalman filters) can fail to accurately predict sophisticated target motions. On the other hand, sequentially learning the motion model (for example, with recurrent neural networks) can be computationally complex and difficult because the number of targets is unknown and time-varying. In this paper, we propose a multi-target filtering and tracking algorithm that learns the motion model simultaneously for all targets from an implicitly represented state map, by performing spatio-temporal data prediction. To this end, the multi-target state is modeled over a continuous hypothetical target space using random finite set and Gaussian mixture probability hypothesis density (GM-PHD) formulations. The prediction step is performed recursively by a deep convolutional recurrent neural network with a long short-term memory architecture, trained on the fly, as a regression block, over probability density difference maps. Our approach is evaluated on widely used pedestrian tracking benchmarks, where it markedly outperforms state-of-the-art multi-target filtering algorithms and gives competitive results against other tracking approaches: it achieves average optimal sub-pattern assignment (OSPA) errors of 40.40 and 62.29 on the MOT15 and MOT16/17 datasets, respectively, and multi-object tracking accuracies of 62.0%, 70.0%, and 66.9% on the MOT16/17, PNNL Parking Lot, and PETS09 pedestrian tracking datasets, respectively, when publicly available detectors are used.
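As a rough illustration of the pipeline the abstract describes, the sketch below first rasterizes a Gaussian-mixture PHD intensity onto a fixed grid (one reading of the "implicitly represented state map") and then regresses the next map with a convolutional LSTM trained on the fly as a regression block. Everything here, including the minimal ConvLSTM cell, the grid size, and the toy two-target sequence, is an assumption made for illustration, not the authors' implementation.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gm_phd_map(means, weights, sigma=2.0, size=64):
    """Rasterize a 2-D Gaussian-mixture PHD intensity onto a size x size grid.
    means   : (N, 2) tensor of component means in grid coordinates
    weights : (N,)  tensor of component weights (expected target counts)
    """
    ys, xs = torch.meshgrid(
        torch.arange(size, dtype=torch.float32),
        torch.arange(size, dtype=torch.float32),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1)             # (size, size, 2)
    diff = grid[None] - means[:, None, None, :]      # (N, size, size, 2)
    dens = torch.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
    dens = dens / (2 * math.pi * sigma ** 2)         # normalize each Gaussian
    return (weights[:, None, None] * dens).sum(0)    # (size, size) intensity

class ConvLSTMCell(nn.Module):
    """Single convolutional LSTM cell; a hypothetical stand-in for the
    paper's deep convolutional recurrent regression block."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))  # all four gates at once
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, (h, c)

# Toy sequence: two targets drifting across the grid over T frames.
size, T = 64, 6
seq = torch.stack([
    gm_phd_map(torch.tensor([[20.0 + 3 * t, 30.0], [40.0, 15.0 + 2 * t]]),
               torch.tensor([1.0, 1.0]), size=size)
    for t in range(T)
]).unsqueeze(1)                                      # (T, 1, size, size)

cell = ConvLSTMCell(1, 16)
head = nn.Conv2d(16, 1, 1)                           # hidden state -> density map
opt = torch.optim.Adam(list(cell.parameters()) + list(head.parameters()), lr=1e-3)

for step in range(50):                               # brief online training loop
    h = torch.zeros(1, 16, size, size)
    state = (h, torch.zeros_like(h))
    loss = 0.0
    for t in range(T - 1):                           # predict frame t+1 from frame t
        out, state = cell(seq[t:t + 1], state)
        loss = loss + F.mse_loss(head(out), seq[t + 1:t + 2])
    opt.zero_grad()
    loss.backward()
    opt.step()

Training inside the tracking loop, as above, mirrors the paper's on-the-fly regression idea: the recurrent predictor is refitted as new density maps arrive rather than being pretrained offline. The paper regresses probability density difference maps rather than the raw maps used here for simplicity.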
Pages: 4545-4555 (11 pages)