CSLT: Contourlet-Based Siamese Learning Tracker for Dim and Small Targets in Satellite Videos

Cited by: 2
Authors
Wu, Yinan [1 ]
Jiao, Licheng [1 ]
Liu, Fang [1 ]
Pi, Zhaoliang [1 ]
Liu, Xu [1 ]
Li, Lingling [1 ]
Yang, Shuyuan [1 ]
Affiliations
[1] Xidian Univ, Key Lab Intelligent Percept & Image Understanding, Minist Educ, Int Res Ctr Intelligent Percept & Comp, Joint Int R, Xian 710071, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2023, Vol. 61
Funding
National Natural Science Foundation of China;
Keywords
Videos; Feature extraction; Satellites; Target tracking; Object tracking; Task analysis; Radar tracking; Contourlet transform (CT); deep learning (DL); model drift; multiresolution; object tracking; remote sensing; OBJECT TRACKING; NETWORKS;
DOI
10.1109/TGRS.2023.3325997
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry];
Discipline Classification Code
0708; 070902;
Abstract
Most popular visual trackers for natural scenes adopt handcrafted features or deep features to track a target through a video. However, they struggle with discriminative feature representation and usually suffer from severe model drift in satellite videos, especially when facing dim and small targets, low contrast, or interference from similar targets. To overcome these difficulties, we propose a contourlet-based Siamese learning tracker (CSLT), which mainly aims at tracking dim and small objects in satellite videos. In contrast to conventional methods, the contourlet transform (CT) enriches directional multiresolution information, which is crucial for discriminative feature representation of dim and small targets in satellite video frames that lack distinguishable appearance features. We fuse the multiresolution features with deep features through a spatial-attention fusion strategy and then track targets with a Siamese network. To further improve accuracy and robustness, a model drift alarm and calibration (MDC) module, comprising a translation drifting penalty and a rotation drifting penalty, is employed during tracking. We conduct extensive comparisons with 16 popular state-of-the-art trackers on three satellite video datasets. The experimental results validate the effectiveness of the proposed tracker.
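As a rough illustration of the fusion-plus-Siamese idea summarized above, the following PyTorch sketch blends a contourlet-derived multiresolution feature map with a deep feature map through a spatial attention mask, then cross-correlates the fused template against the fused search region in SiamFC style. The module name, channel count, and tensor sizes are illustrative assumptions for this record, not the authors' CSLT implementation or its exact fusion rule.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionFusion(nn.Module):
    """Blend contourlet-derived multiresolution features with deep CNN
    features using a per-location attention mask (hypothetical sizes)."""
    def __init__(self, channels):
        super().__init__()
        # 1x1 conv over the concatenated maps yields one attention channel
        self.attn = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, contourlet_feat, deep_feat):
        # Attention weight in [0, 1] decides the mix at each spatial location
        a = torch.sigmoid(self.attn(torch.cat([contourlet_feat, deep_feat], dim=1)))
        return a * contourlet_feat + (1.0 - a) * deep_feat

def siamese_response(template_feat, search_feat):
    """Cross-correlate the template over the search region, as in
    SiamFC-style trackers; the response peak indicates the target."""
    # The fused template acts as a correlation kernel (single exemplar assumed)
    return F.conv2d(search_feat, template_feat)

# Toy usage with random tensors standing in for real features
fuse = SpatialAttentionFusion(channels=64)
z = fuse(torch.randn(1, 64, 6, 6), torch.randn(1, 64, 6, 6))      # template crop
x = fuse(torch.randn(1, 64, 22, 22), torch.randn(1, 64, 22, 22))  # search region
score_map = siamese_response(z, x)
print(score_map.shape)  # torch.Size([1, 1, 17, 17])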
Pages: 13