Visual Object Tracking Algorithm Based on Foreground Optimization

Cited by: 0
Authors
Xie Q.-S. [1 ]
Liu X.-Q. [2 ]
An Z.-Y. [1 ]
Li B. [1 ]
Affiliations
[1] School of Computer Science and Technology, Shandong Technology and Business University, Yantai, Shandong
[2] School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong
Keywords
angle optimization; foreground optimization; object segmentation; object tracking; scale optimization
DOI
10.12263/DZXB.20210641
Abstract
The introduction of object segmentation techniques into visual tracking is a current research hotspot. Segmentation-based trackers typically take the minimum bounding rectangle of the segmentation result as the bounding box. However, complex target motion causes this bounding box to include a large amount of background, which degrades accuracy. To address this problem, this paper proposes a visual object tracking algorithm based on foreground optimization, which unifies the optimization of the bounding box's scale and angle within a single foreground-optimization framework. First, the foreground ratio within the bounding box is evaluated; if it falls below a set threshold, the scale and angle of the bounding box are optimized. In the scale-optimization module, the conditional probability of the bounding box is computed jointly with the regression box, and the scale is adjusted according to the result. In the angle-optimization module, multiple candidate deviation angles are generated for the bounding box, and the optimal angle is selected by a foreground-IoU (Intersection over Union) maximization strategy. Applied to the SiamMask algorithm, the proposed method improves accuracy by about 3.2%, 3.7%, and 3.6% on the VOT2016, VOT2018, and VOT2019 datasets, respectively, while EAO increases by about 1.8%, 1.9%, and 1.6%, respectively. Moreover, the method generalizes to other segmentation-based tracking algorithms. © 2022 Chinese Institute of Electronics. All rights reserved.
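The abstract provides no reference implementation, but the gating test and angle search it describes can be sketched directly. The snippet below is a minimal illustration, assuming a binary segmentation mask and the OpenCV rotated-rectangle convention; the helper names (`foreground_ratio`, `foreground_iou`, `optimize_angle`), the threshold value, and the candidate angle grid are all hypothetical choices, and the scale module's conditional-probability computation is omitted because the abstract does not specify it.

```python
import numpy as np
import cv2  # OpenCV is used only for rotated-rectangle rasterization


def box_region(mask_shape, box):
    """Rasterize a rotated box ((cx, cy), (w, h), angle_deg) -- the
    OpenCV RotatedRect convention -- into a boolean mask."""
    region = np.zeros(mask_shape, dtype=np.uint8)
    pts = cv2.boxPoints(box).astype(np.int32)
    cv2.fillPoly(region, [pts], 1)
    return region.astype(bool)


def foreground_ratio(mask, box):
    """Share of the box interior covered by segmentation foreground."""
    region = box_region(mask.shape, box)
    return (mask & region).sum() / max(region.sum(), 1)


def foreground_iou(mask, box):
    """IoU between the box interior and the foreground mask."""
    region = box_region(mask.shape, box)
    inter = (mask & region).sum()
    union = (mask | region).sum()
    return inter / max(union, 1)


def optimize_angle(mask, box, deviations=range(-15, 16, 3), tau=0.7):
    """Gating test plus angle search: if the foreground ratio of the
    current box falls below the threshold tau, rotate the box by each
    candidate deviation and keep the angle maximizing foreground IoU.
    tau and the deviation grid are illustrative values, not the paper's."""
    if foreground_ratio(mask, box) >= tau:
        return box  # box is already tight enough; skip optimization
    (cx, cy), (w, h), angle = box
    candidates = [((cx, cy), (w, h), angle + d) for d in deviations]
    return max(candidates, key=lambda b: foreground_iou(mask, b))
```

In use, `mask` would come from the tracker's segmentation head (e.g. `mask = seg_logits > 0` for SiamMask-style output) and `box` from the current rotated bounding-box estimate; `optimize_angle(mask, box)` then returns the refined box, leaving well-fitting boxes untouched.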
Pages: 1558-1566
Number of pages: 8