Siamese network based satellite component tracking

Cited by: 0
Authors
Sun Y.-D. [1 ,2 ,3 ]
Wan X. [1 ,2 ,3 ]
Li S.-Y. [1 ,2 ,3 ]
Affiliations
[1] University of Chinese Academy of Sciences, Beijing
[2] Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing
[3] Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing
Keywords
Deep learning; Object tracking; Siamese network; Spacecraft component
DOI
10.37188/OPE.20212912.2915
Abstract
To meet the requirements for precise positioning of spacecraft components during space missions, this paper proposes a spacecraft component tracking algorithm based on a Siamese neural network that addresses the common problem of confusing visually similar components. First, the spacecraft component tracking problem was modeled as data-driven learning with a neural network, and the Siamese network was designed by improving the AlexNet architecture. The network was trained on the large public dataset GOT-10k and optimized with stochastic gradient descent. Finally, to eliminate the positioning confusion caused by the resemblance of similar spacecraft parts, a tracking strategy incorporating motion sequence characteristics was developed to improve tracking accuracy. The algorithm was tested on spacecraft video data published by ESA. The experimental results show that, without using any spacecraft-related data for training, the proposed algorithm achieves intersection-over-union scores of 57.2% and 73.1% for tracking the cabin and the solar panel, respectively, at a speed of 38 FPS. This demonstrates that the proposed method meets the requirements of stable, reliable, high-precision, and interference-resistant tracking of spacecraft components. © 2021, Science Press. All rights reserved.
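The core of a Siamese tracker of this kind is a cross-correlation between the exemplar (template) feature map and the search-region feature map, with the response peak giving the target location. The sketch below illustrates that mechanism and adds a simple distance-based gating as a stand-in for the paper's motion-sequence strategy; the feature shapes, the Gaussian penalty, and all function names are illustrative assumptions, not the paper's actual design or parameters (in the real tracker both feature maps would come from the shared, AlexNet-derived backbone):

```python
import numpy as np

def response_map(exemplar, search):
    """Slide the exemplar (template) feature map over the larger search
    feature map and take the inner product at every offset; the peak of
    the resulting response map marks the most likely target position."""
    c, hz, wz = exemplar.shape
    _, hx, wx = search.shape
    out = np.empty((hx - hz + 1, wx - wz + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[:, i:i + hz, j:j + wz] * exemplar)
    return out

def gated_peak(response, prev_peak, sigma=2.0):
    """Suppress responses far from the previous position with a Gaussian
    penalty -- a simple stand-in for a motion-aware strategy that keeps
    the tracker from jumping to a look-alike component elsewhere."""
    ii, jj = np.indices(response.shape)
    dist2 = (ii - prev_peak[0]) ** 2 + (jj - prev_peak[1]) ** 2
    gated = response * np.exp(-dist2 / (2 * sigma ** 2))
    return np.unravel_index(gated.argmax(), gated.shape)
```

For example, if a 4-channel exemplar is copied into an otherwise empty search feature map at offset (2, 4), `response_map` peaks at exactly that offset, and `gated_peak` keeps it there when the previous position agrees.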
Pages: 2915-2923
Page count: 8