Temporal dynamic appearance modeling for online multi-person tracking

Cited by: 47
Authors
Yang, Min [1 ]
Jia, Yunde [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci, Beijing Lab Intelligent Informat Technol, Beijing 100081, Peoples R China
Funding
Specialized Research Fund for the Doctoral Program of Higher Education;
Keywords
Online multi-person tracking; Appearance modeling; Temporal dynamic; Feature selection; Incremental learning; HIDDEN MARKOV-MODELS; MULTIOBJECT TRACKING; MULTITARGET TRACKING; ASSOCIATION; MULTIPLE; CONFIDENCE; PEOPLE;
DOI
10.1016/j.cviu.2016.05.003
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Robust online multi-person tracking requires correctly associating online detection responses with existing trajectories. We address this problem with a novel appearance modeling approach that provides accurate appearance affinities to guide data association. In contrast to most existing algorithms, which consider only the spatial structure of human appearance, we exploit the temporal dynamic characteristics within appearance sequences to discriminate between different persons. These temporal dynamics complement the spatial structure of varying appearances in the feature space, significantly improving the affinity measurement between trajectories and detections. We propose a feature selection algorithm that describes appearance variations with mid-level semantic features, and demonstrate its usefulness for temporal dynamic appearance modeling. Moreover, the appearance model is learned incrementally by alternately evaluating newly observed appearances and adjusting the model parameters, making it suitable for online tracking. Reliable tracking of multiple persons in complex scenes is achieved by incorporating the learned model into an online tracking-by-detection framework. Our experiments on the challenging MOTChallenge 2015 benchmark [L. Leal-Taixe, A. Milan, I. Reid, S. Roth, K. Schindler, MOTChallenge 2015: Towards a benchmark for multi-target tracking, arXiv preprint arXiv:1504.01942] demonstrate that our method outperforms state-of-the-art multi-person tracking algorithms. (C) 2016 Elsevier Inc. All rights reserved.
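To make the data-association step in the abstract concrete, here is a minimal, illustrative sketch of affinity-guided association in a generic online tracking-by-detection loop. It is not the paper's algorithm: the affinity here is plain cosine similarity between appearance feature vectors with greedy matching, whereas the paper learns a temporal dynamic appearance model to produce the affinities. All identifiers (`cosine_affinity`, `associate`, the toy feature vectors) are invented for this example.

```python
import math

def cosine_affinity(a, b):
    """Cosine similarity between two appearance feature vectors
    (a stand-in for the paper's learned appearance affinity)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def associate(trajectories, detections, threshold=0.5):
    """Greedily pair trajectories with detections by appearance affinity.

    trajectories, detections: dicts mapping id -> feature vector.
    Returns (trajectory_id, detection_id) matches whose affinity exceeds
    `threshold`; in a full tracker, unmatched detections would start new
    trajectories and unmatched trajectories would be propagated or ended.
    """
    # Score every trajectory-detection pair.
    pairs = [(cosine_affinity(t_feat, d_feat), t_id, d_id)
             for t_id, t_feat in trajectories.items()
             for d_id, d_feat in detections.items()]
    pairs.sort(reverse=True)  # highest affinity first
    matches, used_t, used_d = [], set(), set()
    for aff, t_id, d_id in pairs:
        if aff < threshold:
            break  # remaining pairs are all below threshold
        if t_id in used_t or d_id in used_d:
            continue  # each trajectory/detection is matched at most once
        matches.append((t_id, d_id))
        used_t.add(t_id)
        used_d.add(d_id)
    return matches

trajectories = {"T1": [1.0, 0.0, 0.2], "T2": [0.0, 1.0, 0.1]}
detections = {"D1": [0.1, 0.9, 0.1], "D2": [0.9, 0.1, 0.2]}
print(associate(trajectories, detections))  # [('T1', 'D2'), ('T2', 'D1')]
```

In practice the greedy pass is often replaced by an optimal assignment solver (e.g. the Hungarian algorithm); the point here is only that better affinities, such as those from the paper's temporal dynamic model, directly yield better associations.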
Pages: 16-28
Page count: 13