Learning a Proposal Classifier for Multiple Object Tracking

Cited by: 80
Authors
Dai, Peng [1 ]
Weng, Renliang [2 ]
Choi, Wongun [2 ]
Zhang, Changshui [1 ]
He, Zhangping [2 ]
Ding, Wei [1 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Aibee Inc, Beijing, Peoples R China
Keywords
DOI
10.1109/CVPR46437.2021.00247
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The recent trend in multiple object tracking (MOT) is heading towards leveraging deep learning to boost tracking performance. However, it is not trivial to solve the data-association problem in an end-to-end fashion. In this paper, we propose a novel proposal-based learnable framework, which models MOT as a paradigm of proposal generation, proposal scoring and trajectory inference on an affinity graph. The framework is similar to the two-stage object detector Faster R-CNN and can solve the MOT problem in a data-driven way. For proposal generation, we propose an iterative graph clustering method to reduce the computational cost while maintaining the quality of the generated proposals. For proposal scoring, we deploy a trainable graph convolutional network (GCN) to learn the structural patterns of the generated proposals and rank them according to the estimated quality scores. For trajectory inference, a simple de-overlapping strategy is adopted to generate the tracking output while complying with the constraint that no detection can be assigned to more than one track. We experimentally demonstrate that the proposed method achieves a clear performance improvement in both MOTA and IDF1 with respect to previous state-of-the-art methods on two public benchmarks.
Pages: 2443-2452
Number of pages: 10
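
As an illustration of the de-overlapping step described in the abstract, the following is a minimal Python sketch. It assumes each proposal is a list of detection IDs and that scores holds the GCN-estimated quality of each proposal; the greedy selection shown here is only a plausible reading of the "simple de-overlapping strategy", not the authors' exact algorithm.

# Minimal sketch (not the authors' code): keep the highest-scoring proposals
# while ensuring no detection is assigned to more than one track.
# The function name, greedy ordering, and detection-ID representation are
# illustrative assumptions.
def de_overlap(proposals, scores):
    """Greedily select non-overlapping proposals in descending score order."""
    order = sorted(range(len(proposals)), key=lambda i: scores[i], reverse=True)
    assigned = set()   # detection IDs already claimed by an accepted proposal
    tracks = []
    for i in order:
        dets = set(proposals[i])
        if dets & assigned:      # would assign a detection to a second track
            continue
        assigned |= dets
        tracks.append(sorted(dets))
    return tracks

# Toy usage: three overlapping proposals over five detections.
print(de_overlap([[0, 1, 2], [2, 3], [3, 4]], [0.9, 0.8, 0.6]))
# -> [[0, 1, 2], [3, 4]]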
Related Papers (50 total)
  • [41] Grewe, L.; Kak, A. C. Interactive learning of a multiple-attribute hash table classifier for fast object recognition. Computer Vision and Image Understanding, 1995, 61(3): 387-416.
  • [42] Wu, Han; Nie, Jiahao; Zhu, Ziming; He, Zhiwei; Gao, Mingyu. Learning task-specific discriminative representations for multiple object tracking. Neural Computing & Applications, 2023, 35(10): 7761-7777.
  • [43] Pernkopf, Franz. Multiple object tracking using incremental learning for appearance model adaptation. VISAPP 2008: Proceedings of the Third International Conference on Computer Vision Theory and Applications, Vol. 2, 2008: 463-468.
  • [44] Wu, Feng; Peng, Shaowu; Zhou, Jingkai; Liu, Qiong; Xie, Xiaojia. Object tracking via Online Multiple Instance Learning with reliable components. Computer Vision and Image Understanding, 2018, 172: 25-36.
  • [45] Moraffah, Bahman; Papandreou-Suppappola, Antonia. Dependent Dirichlet Process Modeling and Identity Learning for Multiple Object Tracking. 2018 Conference Record of the 52nd Asilomar Conference on Signals, Systems, and Computers, 2018: 1762-1766.
  • [46] Dash, Prajna Parimita; Mishra, Sudhansu Kumar; Senapati, Kishore Kumar; Panda, Ganapati. Interactive teaching learning based optimization technique for multiple object tracking. Multimedia Tools and Applications, 2021, 80(7): 10577-10600.
  • [47] Wang, Yu-Hsiang; Hsieh, Jun-Wei; Chen, Ping-Yang; Chang, Ming-Ching; So, Hung-Hin; Li, Xin. SMILEtrack: SiMIlarity LEarning for Occlusion-Aware Multiple Object Tracking. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 6, 2024: 5740-5748.
  • [49] Hua, W.; Mu, D.; Guo, D.; Liu, H. Visual object tracking based on objectness measure with multiple instance learning. Beijing University of Aeronautics and Astronautics (BUAA), 43: 1364-1372.
  • [50] Liu, Jialin; Kong, Jun; Jiang, Min; Liu, Tianshan. CALTracker: Cross-Task Association Learning for Multiple Object Tracking. IEEE Signal Processing Letters, 2023, 30: 1622-1626.