Online learning discriminative sparse convolution networks for robust UAV object tracking

Cited: 1
Authors
Xu, Qi [1 ]
Xu, Zhuoming [1 ]
Wang, Huabin [2 ]
Chen, Yun [1 ]
Tao, Liang [2 ]
Affiliations
[1] Hohai Univ, Coll Comp Sci & Software Engn, Nanjing 210098, Peoples R China
[2] Anhui Univ, Sch Comp Sci & Technol, Hefei 230031, Peoples R China
Keywords
UAV object tracking; Online learning; Deep learning; Convolutional networks; Sparse constraints
DOI
10.1016/j.knosys.2024.112742
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Despite their remarkable empirical success in UAV object tracking, current convolutional networks usually suffer from three limitations: (1) the feature maps produced by convolutional layers are difficult to interpret; (2) the network must be trained offline on a large-scale auxiliary training set, so the feature extraction ability of the trained network depends on the categories in that set; and (3) network performance is sensitive to hyper-parameters (such as learning rate and weight decay) when online fine-tuning is required. To overcome these three limitations, this paper proposes a Discriminative Sparse Convolutional Network (DSCN) that exhibits good layer-wise interpretability and can be trained online without any auxiliary training data. By imposing sparsity constraints on the convolutional kernels, DSCN gives the convolutional layers an explicit data meaning, thus enhancing the interpretability of the feature maps. These convolutional kernels are learned online directly from image blocks, which eliminates the offline training process on auxiliary data sets. Moreover, a simple yet effective online tuning method with few hyper-parameters is proposed to fine-tune the fully connected layers online. We have applied DSCN to UAV object tracking and conducted extensive experiments on six mainstream UAV datasets. The experimental results demonstrate that our method performs favorably against several state-of-the-art tracking algorithms in terms of tracking accuracy and robustness.
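To make the central mechanism concrete, below is a minimal sketch of the general idea, not the authors' implementation: convolutional kernels are learned online from image patches of the current frame, so each feature map is the response to a patch-like atom with explicit data meaning, and no offline pre-training on auxiliary data is needed. The sketch assumes standard mini-batch sparse dictionary learning (scikit-learn's MiniBatchDictionaryLearning, where sparsity is imposed on the patch codes) as a stand-in for the paper's specific sparsity constraint on the kernels; the patch size, kernel count, and sparsity weight alpha are illustrative assumptions.

# Illustrative sketch only; parameters and libraries are assumptions,
# not details from the paper.
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

PATCH = 8        # assumed kernel size
N_KERNELS = 16   # assumed number of sparse kernels

def learn_sparse_kernels(frame_gray, seed=0):
    """Learn sparsity-constrained kernels online from patches of one frame."""
    patches = extract_patches_2d(frame_gray, (PATCH, PATCH),
                                 max_patches=500, random_state=seed)
    X = patches.reshape(len(patches), -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)   # zero-mean each patch
    dico = MiniBatchDictionaryLearning(n_components=N_KERNELS, alpha=1.0,
                                       batch_size=32, random_state=seed)
    dico.fit(X)                          # mini-batch sparse dictionary learning
    # Each dictionary atom is reshaped into one convolutional kernel.
    return dico.components_.reshape(N_KERNELS, PATCH, PATCH)

def feature_maps(frame_gray, kernels):
    """Convolve the frame with the learned kernels; each map is directly
    interpretable as the response to one learned patch-like atom."""
    return np.stack([convolve2d(frame_gray, k, mode="same") for k in kernels])

# Usage on a synthetic frame:
frame = np.random.rand(64, 64)
kernels = learn_sparse_kernels(frame)
maps = feature_maps(frame, kernels)      # shape: (16, 64, 64)

Because the kernels come from the frame itself rather than from an offline training set, this kind of construction avoids any dependence on the categories of auxiliary data, which is the property the abstract highlights.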
Pages: 18