Real-time and robust object tracking method in frequency domain space

Cited by: 0
|
Authors
Wang R. [1 ]
Chen Y. [2 ]
Ma S. [1 ]
Lyu J. [1 ]
Affiliations
[1] School of Computer Science and Engineering, Beijing University of Aeronautics and Astronautics, Beijing
[2] Guangdong Nanfang Vocational College, Jiangmen
Source
Lyu, Jianghua (jhlv@nlsde.buaa.edu.cn) | 2017 / Beijing University of Aeronautics and Astronautics (BUAA) / Vol. 43
Funding
National Natural Science Foundation of China;
Keywords
Computer vision; Dense circulation sampling; Energy minimization method; Frequency domain space; Object tracking;
DOI
10.13700/j.bh.1001-5965.2016.0906
Abstract
This paper addresses real-time, robust object tracking. Dense circulant sampling and a frequency-domain transform are applied to the tracking process: we propose an energy-minimization tracking method in the frequency domain and introduce the concept of dense circulant sampling to handle changes in object shape, appearance, orientation, and scale, as well as scene illumination changes, video jitter, and object occlusion during tracking. The method estimates the target from ten adjacent frames using a circulant matrix in the frequency domain and defines the tracking error as an energy function, giving, to our knowledge, the first frequency-domain energy-minimization formulation for tracking. Minimizing this energy minimizes the error between the tracked target and the ground truth, so the algorithm obtains more precise target estimates rapidly while sharply reducing the amount of data processed. By combining dense circulant sampling with energy minimization, the algorithm tracks stably under target orientation deformation, scene illumination changes, video jitter, target scale changes, and partial occlusion. Compared with current state-of-the-art methods, the proposed method significantly improves both tracking precision and efficiency. © 2017, Editorial Board of JBUAA. All rights reserved.
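The abstract's combination of dense circulant sampling and frequency-domain energy minimization is the core idea behind correlation-filter trackers (e.g. MOSSE/KCF): the FFT diagonalizes the circulant matrix of all cyclic shifts of a target patch, so ridge regression over every densely sampled shift collapses to element-wise operations in the frequency domain. The sketch below illustrates that mechanism only; it is not the authors' implementation, and the function names and parameters (`train_filter`, `locate`, `sigma`, `lam`) are illustrative assumptions.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Learn a correlation filter over all cyclic shifts of `patch`.

    Dense circulant sampling is implicit: in the Fourier basis the
    circulant matrix of shifted samples is diagonal, so the energy
      E(H) = sum_k |F_k H_k - G_k|^2 + lam |H_k|^2
    is minimized per frequency bin in closed form.
    """
    h, w = patch.shape
    # Desired response: Gaussian peaked at the patch centre,
    # shifted so a zero-translation patch peaks at index (0, 0).
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    G = np.fft.fft2(np.fft.ifftshift(g))
    F = np.fft.fft2(patch)
    # Closed-form energy minimizer: H = conj(F) G / (|F|^2 + lam)
    return np.conj(F) * G / (F * np.conj(F) + lam)

def locate(H, patch):
    """Apply the filter to a new patch; the response peak gives the
    estimated cyclic shift of the target."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return int(dy), int(dx)

# Usage: the filter recovers a known circular shift of the template.
np.random.seed(0)
patch = np.random.rand(32, 32)
H = train_filter(patch)
shifted = np.roll(patch, (3, 5), axis=(0, 1))
print(locate(H, shifted))  # estimated (dy, dx) translation
```

Because every arithmetic step is element-wise on FFT coefficients, training and detection each cost O(n log n) per frame, which is what makes this family of trackers real-time.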
Pages: 2457-2465
Page count: 8