Siamese Visual Tracking with Robust Adaptive Learning

Cited by: 0
Authors
Zhang, Wancheng [1 ]
Chen, Zhi [1 ]
Liu, Peizhong [2 ,3 ]
Deng, Jianhua [2 ]
Affiliations
[1] Huaqiao Univ, Coll Engn, Quanzhou 362000, Peoples R China
[2] Quanzhou Zhongfang Hongye Informat Technol Co LTD, Quanzhou 362000, Peoples R China
[3] Fujian Prov Big Data Res Inst Intelligent Mfg, Quanzhou 362000, Peoples R China
Keywords
visual tracking; Siamese network; adaptive feature fusion; model update
DOI
10.1109/icasid.2019.8925141
CLC Number
TP3 [computing technology, computer technology]
Discipline Classification Code
0812
Abstract
Correlation filters and deep learning are the two main directions in visual tracking research, but trackers in these families rarely balance accuracy and speed well. Siamese networks have brought large improvements in both, and a growing number of researchers are paying attention to this approach. In this paper, building on the Siamese network model, we propose a robust adaptive learning visual tracking algorithm. HOG features, CN features, and deep convolutional features are extracted from the template frame and the search-region frame, respectively; we analyze the merits of each feature and perform adaptive feature fusion to improve the validity of the feature representation. We then update the two branch models with two learning change factors, achieving a closer match for locating the target. In addition, we propose a model update strategy that uses the average peak-to-correlation energy (APCE) to decide whether to update the learning change factors, improving the accuracy of the tracking model and reducing tracking drift in cases of tracking failure, deformation, background blur, etc. Extensive experiments on the benchmark datasets OTB-50 and OTB-100 demonstrate that our algorithm outperforms several state-of-the-art trackers in accuracy and robustness.
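The APCE criterion mentioned in the abstract measures how peaked and clean a response map is: APCE = |F_max − F_min|² / mean((F_{w,h} − F_min)²), where F is the correlation response. A minimal sketch of how such an update gate might work is shown below; the paper does not specify its thresholding rule, so the `ratio` gate against a running APCE history is an illustrative assumption, not the authors' exact method.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a 2-D response map.

    APCE = |F_max - F_min|^2 / mean((F_{w,h} - F_min)^2).
    A sharp, single-peaked response yields a high APCE; a noisy or
    multi-modal response (occlusion, background blur) yields a low one.
    """
    f_max = response.max()
    f_min = response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

def should_update(response, apce_history, ratio=0.5):
    """Hypothetical gate: update the model only when the current APCE
    reaches `ratio` times the running mean of past APCE values."""
    current = apce(response)
    if not apce_history or current >= ratio * np.mean(apce_history):
        apce_history.append(current)  # accept: record this frame's APCE
        return True
    return False  # reject: likely tracking failure, skip the update
```

Skipping updates on low-APCE frames keeps corrupted observations (e.g. during occlusion) out of the model, which is the drift-reduction effect the abstract describes.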
Pages: 153-157
Page count: 5
Related Papers
50 records in total
  • [31] SiamPAT: Siamese point attention networks for robust visual tracking
    Chen, Hang
    Zhang, Weiguo
    Yan, Danghui
    JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (05)
  • [32] Siamese Adaptive Network-Based Accurate and Robust Visual Object Tracking Algorithm for Quadrupedal Robots
    Cao, Zhengcai
    Li, Junnian
    Shao, Shibo
    Zhang, Dong
    Zhou, Mengchu
    IEEE TRANSACTIONS ON CYBERNETICS, 2025,
  • [33] Learning Dynamic Siamese Network for Visual Object Tracking
    Guo, Qing
    Feng, Wei
    Zhou, Ce
    Huang, Rui
    Wan, Liang
    Wang, Song
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1781 - 1789
  • [34] Learning Rotation Adaptive Correlation Filters in Robust Visual Object Tracking
    Rout, Litu
    Raju, Priya Mariam
    Mishra, Deepak
    Gorthi, Rama Krishna Sai Subrahmanyam
    COMPUTER VISION - ACCV 2018, PT II, 2019, 11362 : 646 - 661
  • [35] Deep learning assisted robust visual tracking with adaptive particle filtering
    Qian, Xiaoyan
    Han, Lei
    Wang, Yuedong
    Ding, Meng
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2018, 60 : 183 - 192
  • [36] Improved Siamese classification and regression adaptive network for visual tracking
    Dou, Kaiqi
    Zhu, Fuzhen
    Cui, Jingyi
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2022, 43 (11) : 4134 - 4150
  • [37] Adaptive Framework for Robust Visual Tracking
    Abdelpakey, Mohamed H.
    Shehata, Mohamed S.
    Mohamed, Mostafa M.
    Gong, Minglun
    IEEE ACCESS, 2018, 6 : 55273 - 55283
  • [38] Siamese Network Based Features Fusion for Adaptive Visual Tracking
    Guo, Dongyan
    Zhao, Weixuan
    Cui, Ying
    Wang, Zhenhua
    Chen, Shengyong
    Zhang, Jian
    PRICAI 2018: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2018, 11012 : 759 - 771
  • [39] Adaptive NormalHedge for robust visual tracking
    Zhang, Shengping
    Zhou, Huiyu
    Yao, Hongxun
    Zhang, Yanhao
    Wang, Kuanquan
    Zhang, Jun
    SIGNAL PROCESSING, 2015, 110 : 132 - 142
  • [40] Robust Visual Tracking Algorithm Based on Siamese Network with Dual Templates
    Hou Zhiqiang
    Chen Lilin
    Yu Wangsheng
    Ma Sugang
    Fan Jiulun
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2019, 41 (09) : 2247 - 2255