Robust visual multitask tracking via composite sparse model

Cited by: 2
Authors
Jin, Bo [1 ]
Jing, Zhongliang [1 ]
Wang, Meng [2 ]
Pan, Han [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Aeronaut & Astronaut, Shanghai 200240, Peoples R China
[2] Chinese Acad Sci, Shanghai Inst Tech Phys, Shanghai 200083, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
visual tracking; sparse representation; multitask learning; dirty model; alternating direction method of multipliers; OBJECT TRACKING;
DOI
10.1117/1.JEI.23.6.063022
CLC Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Recently, multitask learning was applied to visual tracking by learning sparse particle representations in a joint task, leading to the so-called multitask tracking algorithm (MTT). Although MTT delivers impressive tracking performance by mining the interdependencies between particles, it underestimates the individual features of each particle: the ℓ1,q norm regularization it employs assumes all features are shared across all particles and yields nearly identical representation coefficients in the nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model that formulates the object appearance as a combination of a shared feature component, an individual feature component, and an outlier component. The composite sparsity is achieved via ℓ1,∞ and ℓ1,1 norm minimization and is optimized by the alternating direction method of multipliers (ADMM), which provides favorable reconstruction performance and impressive computational efficiency. Moreover, a dynamic dictionary updating scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges; experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speed than traditional sparse models, and that CSMTT consistently outperforms seven state-of-the-art trackers. (C) 2014 SPIE and IS&T
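The composite sparsity described in the abstract is typically enforced inside ADMM through the proximal operators of the ℓ1,1 norm (elementwise soft-thresholding, for the individual/outlier components) and the ℓ1,∞ norm (row-wise, for the shared component). The sketch below illustrates these two building blocks only; it is not the paper's full tracker, and the function names are our own. The ℓ1,∞ prox is computed via the Moreau decomposition, using the standard ℓ1-ball projection of Duchi et al.

```python
import numpy as np

def prox_l11(V, lam):
    """Proximal operator of lam * ||V||_{1,1}: elementwise soft-thresholding."""
    return np.sign(V) * np.maximum(np.abs(V) - lam, 0.0)

def project_l1_ball(v, radius):
    """Euclidean projection of a vector onto the l1 ball of the given radius."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = np.max(ks[u - (css - radius) / ks > 0])
    theta = (css[rho - 1] - radius) / rho  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_l1inf(V, lam):
    """Row-wise proximal operator of lam * ||V||_{1,inf}.

    By the Moreau decomposition, prox of the l_inf norm on each row is
    the residual of projecting that row onto the l1 ball of radius lam.
    """
    return np.vstack([row - project_l1_ball(row, lam) for row in V])
```

Within each ADMM iteration, the splitting variables for the shared, individual, and outlier components would each be updated by one of these prox steps; rows whose ℓ1 mass is below lam are zeroed entirely by `prox_l1inf`, which is what encourages whole rows (shared features) to switch on or off together.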
Pages: 15