Discriminative Object Tracking via Sparse Representation and Online Dictionary Learning

Cited by: 90
Authors
Xie, Yuan [1 ]
Zhang, Wensheng [1 ]
Li, Cuihua [2 ]
Lin, Shuyang [2 ]
Qu, Yanyun [2 ]
Zhang, Yinghua [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Complex Syst & Intelligence Sci, Beijing 100190, Peoples R China
[2] Xiamen Univ, Dept Comp Sci, Video & Image Lab, Xiamen 361005, Peoples R China
Funding
Specialized Research Fund for the Doctoral Program of Higher Education; National Natural Science Foundation of China;
Keywords
Dictionary learning; object tracking; robust keypoints matching; sparse representation; VISUAL TRACKING; ROBUST; SELECTION;
DOI
10.1109/TCYB.2013.2259230
Chinese Library Classification
TP [automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. The algorithm consists of two parts: local sparse coding with an online-updated discriminative dictionary for tracking (SOD part), and keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, local image patches of the target object and background are represented by their sparse codes over an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes information about both the foreground and the background, may provide more discriminative power. Furthermore, to adapt the dictionary to variations of the foreground and background during tracking, an online learning method is employed to update it. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of the sparse representation and the online-updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
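As a rough illustration of the SOD idea described in the abstract (not the authors' implementation), the sketch below sparse-codes local patches over a joint foreground/background dictionary, scores a candidate region by comparing class-wise reconstruction errors, and updates the dictionary online. It simplifies the paper's single discriminative dictionary into two separately maintained halves, and all names, patch sizes, atom counts, and sparsity levels are assumptions for illustration.

```python
# Minimal sketch of discriminative sparse coding with an online-updated
# dictionary, under the simplifying assumptions stated above.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

PATCH_DIM = 64        # e.g., flattened 8x8 grayscale patches (assumed)
N_FG, N_BG = 32, 32   # atoms devoted to foreground / background (assumed)

# Two online learners keep the foreground/background halves labeled;
# the paper instead learns one discriminative dictionary jointly.
fg_learner = MiniBatchDictionaryLearning(n_components=N_FG, batch_size=16)
bg_learner = MiniBatchDictionaryLearning(n_components=N_BG, batch_size=16)

def update_dictionary(fg_patches, bg_patches):
    """Online update: one mini-batch step per class of patches."""
    fg_learner.partial_fit(fg_patches)
    bg_learner.partial_fit(bg_patches)

def score_candidate(patches, n_nonzero=5):
    """Higher score = patches better explained by foreground atoms."""
    D = np.vstack([fg_learner.components_, bg_learner.components_])
    codes = sparse_encode(patches, D, algorithm='omp',
                          n_nonzero_coefs=n_nonzero)
    # Reconstruct with only the foreground or only the background atoms.
    recon_fg = codes[:, :N_FG] @ D[:N_FG]
    recon_bg = codes[:, N_FG:] @ D[N_FG:]
    err_fg = np.linalg.norm(patches - recon_fg, axis=1)
    err_bg = np.linalg.norm(patches - recon_bg, axis=1)
    return float(np.mean(err_bg - err_fg))  # discriminative score

# Usage sketch: seed the dictionary with initial patches, then score
# candidate regions inside a Bayesian (e.g., particle-filter) loop and
# keep updating with confidently labeled foreground/background patches.
rng = np.random.default_rng(0)
update_dictionary(rng.standard_normal((64, PATCH_DIM)),
                  rng.standard_normal((64, PATCH_DIM)))
print(score_candidate(rng.standard_normal((10, PATCH_DIM))))
```

The keypoint-matching refinement (KP part) and the full Bayesian inference framework are omitted here; the sketch only conveys how a dictionary encoding both foreground and background can yield a discriminative likelihood for tracking.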
Pages: 539-553
Number of pages: 15