Visual Pose Estimation of Rescue Unmanned Surface Vehicle From Unmanned Aerial System

Cited by: 9
Authors
Dufek, Jan [1 ]
Murphy, Robin [1 ]
Affiliations
[1] Texas A&M Univ, Dept Comp Sci & Engn, College Stn, TX 77843 USA
Funding
US National Science Foundation;
Keywords
visual pose estimation; visual localization; heterogeneous multi-robot team; search and rescue robotics; field robotics; computer vision; marine robotics; aerial robotics; TARGET TRACKING; MOBILE ROBOTS; ONBOARD; UAV;
DOI
10.3389/frobt.2019.00042
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject Classification Code
080202 ; 1405 ;
Abstract
This article addresses the problem of how to visually estimate the pose of a rescue unmanned surface vehicle (USV) using an unmanned aerial system (UAS) in marine mass casualty events. A UAS visually navigating the USV can help solve problems with teleoperation and manpower requirements. The solution has to estimate full pose (both position and orientation), work in an outdoor environment from an oblique view angle (up to 85 degrees from nadir) at large distances (180 m) in real time (5 Hz), and assume both a moving UAS (up to 22 m s^-1) and a moving object (up to 10 m s^-1). None of the 58 reviewed studies satisfied all those requirements. This article presents two algorithms for visual position estimation using the object's hue (thresholding and histogramming) and four techniques for visual orientation estimation using the object's shape while satisfying those requirements. Four physical experiments were performed to validate the feasibility and compare the thresholding and histogramming algorithms. The histogramming had statistically significantly lower position estimation error compared to thresholding for all four trials (p-values ranged from ~0 to 8.23263 × 10^-29), but it only had statistically significantly lower orientation estimation error for two of the trials (p-values 3.51852 × 10^-39 and 1.32762 × 10^-46). The mean position estimation error ranged from 7 to 43 px while the mean orientation estimation error ranged from 0.134 to 0.480 rad. The histogramming algorithm demonstrated feasibility for variations in environmental conditions and physical settings while requiring fewer parameters than thresholding. However, three problems were identified: the orientation estimation error was quite large for both algorithms, both algorithms required manual tuning before each trial, and neither algorithm was robust enough to recover from significant changes in illumination conditions.
To reduce the orientation estimation error, inverse perspective warping will be necessary to reduce the perspective distortion. To eliminate the necessity for tuning and increase the robustness, a machine learning approach to pose estimation might ultimately be a better solution.
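The abstract's pipeline (segment the USV by hue, take the blob centroid as the position estimate, and derive orientation from the blob's shape) can be sketched in a few lines. The following is a hypothetical numpy-only illustration, not the paper's implementation: the function name `estimate_pose`, the hue-band parameters, and the principal-axis (PCA-style) orientation step are all assumptions made for the example.

```python
import numpy as np

def estimate_pose(hue, lo, hi):
    """Hypothetical sketch: position and orientation of the single
    hue-segmented blob in a hue-channel image.

    hue    -- 2-D array of per-pixel hue values
    lo, hi -- hue band assumed to cover the object's color
    Returns ((cx, cy), theta) in pixels/radians, or None if empty.
    Orientation is the blob's principal axis, defined modulo pi.
    """
    ys, xs = np.nonzero((hue >= lo) & (hue <= hi))   # hue thresholding
    if xs.size == 0:
        return None                                   # object not found
    cx, cy = xs.mean(), ys.mean()                     # centroid = position
    pts = np.stack([xs - cx, ys - cy])                # centered coordinates
    cov = (pts @ pts.T) / xs.size                     # 2x2 shape covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]            # longest blob axis
    theta = np.arctan2(major[1], major[0])            # orientation estimate
    return (cx, cy), theta
```

Note the inherent pi-ambiguity of a shape-only orientation estimate (a symmetric blob looks the same rotated 180 degrees); resolving heading sign would need an additional cue, which is one reason orientation error can stay large under perspective distortion.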
Pages: 20