Pose Estimation Method for Non-Cooperative Target Based on Deep Learning

Times Cited: 8
Authors
Deng, Liwei [1]
Suo, Hongfei [1]
Jia, Youquan [1]
Huang, Cheng [1]
Affiliations
[1] Harbin Univ Sci & Technol, Sch Automation, Heilongjiang Prov Key Lab Complex Intelligent Syst, Harbin 150080, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
non-cooperative target; YOLOv5s; scSE-LHRNet; target recognition; pose estimation;
DOI
10.3390/aerospace9120770
CLC Classification Number
V [Aviation, Aerospace];
Discipline Classification Code
08; 0825;
Abstract
The strength of scientific research in the aerospace field has become an essential criterion for measuring a country's scientific and technological level and comprehensive national power, yet many on-orbit factors remain beyond direct human control. A well-known difficulty in rendezvous and docking with a non-cooperative target is that the target cannot provide attitude information autonomously, and existing pose estimation methods for non-cooperative targets suffer from low accuracy and high resource consumption. This paper proposes a deep-learning-based pose estimation method to address these problems. The method consists of two innovative parts. First, the lightweight You Only Look Once v5 (YOLOv5s) network is used to pre-recognize non-cooperative targets. Second, concurrent spatial and channel squeeze-and-excitation modules are introduced into a lightweight High-Resolution Network (HRNet) to preserve its real-time advantages, yielding a spatial and channel Squeeze-and-Excitation Lightweight High-Resolution Network (scSE-LHRNet) for pose estimation. To verify the superiority of the proposed network, experiments were conducted on a publicly available dataset and compared against existing methods using multiple evaluation metrics. The results show that the proposed method substantially reduces model complexity and computation while achieving strong pose estimation performance.
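The pipeline described in the abstract first localizes the target (YOLOv5s detection) and then estimates pose with an scSE-gated lightweight HRNet. As a minimal PyTorch sketch of the core building block, a concurrent spatial and channel squeeze-and-excitation (scSE) gate can be written as below; the class name SCSEBlock, the reduction ratio of 16, and the addition-based fusion of the two branches are illustrative assumptions following the generic scSE formulation, not the authors' released code.

```python
import torch
import torch.nn as nn


class SCSEBlock(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE) gate.

    Illustrative sketch: hyperparameters and fusion choice are assumptions,
    not taken from the paper's implementation.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # cSE branch: global average pool -> bottleneck MLP -> channel gates
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # sSE branch: 1x1 conv -> per-pixel spatial gates
        self.sse = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recalibrate along channels and along space, then fuse by
        # element-wise addition (one common fusion choice for scSE).
        return x * self.cse(x) + x * self.sse(x)


if __name__ == "__main__":
    # Shape typical of a high-resolution HRNet branch feature map.
    feats = torch.randn(1, 32, 64, 48)
    out = SCSEBlock(channels=32)(feats)
    print(out.shape)  # torch.Size([1, 32, 64, 48])
```

In such a design, the cSE branch reweights whole feature channels using global context while the sSE branch reweights individual spatial locations, so the combined gate adds attention at a very small parameter cost, which is consistent with the paper's goal of keeping the HRNet lightweight.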
Pages: 14