LEARNING A TEMPORALLY INVARIANT REPRESENTATION FOR VISUAL TRACKING

Cited by: 0
Authors
Ma, Chao [1 ,2 ]
Yang, Xiaokang [1 ]
Zhang, Chongyang [1 ]
Yang, Ming-Hsuan [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai 200030, Peoples R China
[2] Univ Calif, Merced, CA USA
Source
2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | 2015
Funding
US National Science Foundation;
Keywords
temporal invariance; feature learning; correlation filters; object tracking; OBJECT;
DOI
None
CLC classification
TM [Electrical engineering]; TN [Electronic technology, communication technology];
Subject classification
0808; 0809
Abstract
In this paper, we propose to learn temporally invariant features from a large number of image sequences to represent objects for visual tracking. These features are learned by a convolutional neural network with temporal invariance constraints and are robust to diverse motion transformations. We employ linear correlation filters to encode the appearance templates of targets and perform tracking by searching for the maximum response in each frame. The learned filters are updated online to adapt to significant appearance changes during tracking. Extensive experimental results on challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
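The pipeline outlined in the abstract — encode the target's appearance template with a linear correlation filter, locate the target at the maximum of the response map, and update the filter online — can be illustrated with a minimal single-channel ridge-regression correlation filter in the Fourier domain. This is only a sketch of the general correlation-filter machinery, not the authors' implementation: the learned CNN features, multi-channel handling, and the paper's exact update rule are omitted, and raw pixel values stand in for the learned features.

```python
import numpy as np

def train_filter(feat, target_response, lam=1e-2):
    """Ridge-regression correlation filter in the Fourier domain:
    H = (G * conj(F)) / (F * conj(F) + lam), elementwise."""
    F = np.fft.fft2(feat)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, feat):
    """Response map: apply the filter to the features; the target is
    located at the argmax of the (real) inverse transform."""
    F = np.fft.fft2(feat)
    return np.real(np.fft.ifft2(H * F))

def update_filter(H_old, H_new, eta=0.02):
    """Online adaptation by linear interpolation between the running
    filter and one trained on the current frame."""
    return (1.0 - eta) * H_old + eta * H_new
```

Because the filter is trained and applied in the Fourier domain, both training and detection cost only a few FFTs per frame, which is what makes correlation-filter trackers fast; a circular shift of the input features shifts the response peak by the same amount, so the argmax of the response map directly gives the target's translation.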
Pages: 857-861
Page count: 5