A New Fisheye Video Target Tracking Method by Integrating Response Template and Multiple Features

Cited: 0
Authors
Zhou X. [1 ,2 ]
Huang C. [2 ]
Shao Z. [2 ]
Chen S. [2 ,3 ]
Lei B. [4 ]
Affiliations
[1] College of Electrical and Information Engineering, Quzhou University, Quzhou
[2] College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou
[3] School of Computer Communication and Engineering, Tianjin University of Technology, Tianjin
[4] Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering, China Three Gorges University, Yichang
Source
Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics | 2019, Vol. 31, No. 7
Keywords
Correlation filter; Fisheye video; Image distortion; Target tracking;
DOI
10.3724/SP.J.1089.2019.17442
Chinese Library Classification
TN911 [Communication Theory];
Discipline Code
081002;
Abstract
The wide use of fisheye cameras has brought increasing attention to target tracking in fisheye video. However, the severe distortion caused by the special imaging principle of the fisheye lens negatively affects target tracking. To weaken the interference of this distortion, this paper proposes a novel fisheye video target tracking method based on a response template and feature integration. First, the proposed method synthesizes a response template from the responses of multiple samples and constructs a classifier based on this template; it then extracts the object's HOG feature and Color Name feature to train the corresponding classifiers. The responses of the two classifiers are considered jointly to determine the target location. To further optimize the tracker, an imaging model is used to correct the deformed target before classifier training. Finally, evaluation results on the constructed fisheye video dataset validate that the proposed method greatly reduces the negative impact of image distortion and target deformation while maintaining real-time performance. © 2019, Beijing China Science Journal Publishing Co. Ltd. All rights reserved.
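The core step the abstract describes — two correlation-filter classifiers (one on HOG features, one on Color Name features) whose response maps are fused to locate the target — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the MOSSE-style Fourier-domain response, the `fuse_and_locate` helper, and the equal-weight fusion default are all assumptions for the sake of the example.

```python
import numpy as np

def filter_response(feature, filt_hat):
    """Correlation response of one classifier, computed in the
    Fourier domain (MOSSE-style; an assumption, not the paper's
    exact formulation). `filt_hat` is the filter's FFT."""
    feature_hat = np.fft.fft2(feature)
    return np.real(np.fft.ifft2(np.conj(filt_hat) * feature_hat))

def fuse_and_locate(resp_hog, resp_cn, w_hog=0.5):
    """Jointly consider the two response maps by weighted fusion
    and take the peak as the target location. The fusion weight
    w_hog is a hypothetical parameter, not taken from the paper."""
    fused = w_hog * resp_hog + (1.0 - w_hog) * resp_cn
    dy, dx = np.unravel_index(np.argmax(fused), fused.shape)
    return (dy, dx), fused
```

A weighted sum is only one plausible fusion rule; alternatives such as peak-to-sidelobe-ratio weighting would slot into `fuse_and_locate` without changing the surrounding pipeline.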
Pages: 1067-1074
Page count: 7
Related Papers
25 in total
  • [1] Dooley D., McGinley B., Hughes C., Et al., A blind-zone detection method using a rear-mounted fisheye camera with combination of vehicle detection methods, IEEE Transactions on Intelligent Transportation Systems, 17, 1, pp. 264-278, (2016)
  • [2] Qin X.B., Li S.G., Tracking feature points of fisheye full-view image by normalized image patch, IEEJ Transactions on Electronics, Information and Systems, 132, 9, pp. 1516-1523, (2012)
  • [3] Zhou X.L., Li J.W., Chen S.Y., Et al., Multiple perspective object tracking via context-aware correlation filter, IEEE Access, 6, pp. 43262-43273, (2018)
  • [4] Zhou X., Shen H., He B., Birth intensity estimation method for multi-target video tracking, Journal of Computer-Aided Design & Computer Graphics, 26, 12, pp. 2223-2231, (2014)
  • [5] Tsai F.S., Hsu S.Y., Shih M.H., Adaptive tracking control for robots with an interneural computing scheme, IEEE Transactions on Neural Networks and Learning Systems, 29, 4, pp. 832-844, (2018)
  • [6] Cifuentes C.G., Issac J., Wuthrich M., Et al., Probabilistic articulated real-time tracking for robot manipulation, IEEE Robotics and Automation Letters, 2, 2, pp. 577-584, (2017)
  • [7] Li J., Zhou X., Chan S., Et al., A novel video target tracking method based on adaptive convolutional neural network feature, Journal of Computer-Aided Design & Computer Graphics, 30, 2, pp. 273-281, (2018)
  • [8] Zhou X.L., Li Y.F., He B.W., Et al., GM-PHD-based multi-target visual tracking using entropy distribution and game theory, IEEE Transactions on Industrial Informatics, 10, 2, pp. 1064-1076, (2014)
  • [9] Bolme D.S., Beveridge J.R., Draper B.A., Et al., Visual object tracking using adaptive correlation filters, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2544-2550, (2010)
  • [10] Henriques J.F., Caseiro R., Martins P., Et al., Exploiting the circulant structure of tracking-by-detection with kernels, Proceedings of European Conference on Computer Vision, pp. 702-715, (2012)