N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs

Cited by: 3
Authors
Tang, Dengqing [1]
Shen, Lincheng [1]
Xiang, Xiaojia [1]
Zhou, Han [1]
Lai, Jun [1]
Affiliations
[1] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha 410073, Peoples R China
Keywords
pose estimation; auto-landing fixed-wing UAVs; ground vision system; block convolutional neural networks; VEHICLE; VISION; FEATURES; SYSTEM;
DOI
10.3390/drones7120693
Chinese Library Classification
TP7 [Remote Sensing Technology];
Discipline Classification Codes
081102; 0816; 081602; 083002; 1404;
Abstract
We propose a novel 6D pose estimation approach tailored for auto-landing fixed-wing unmanned aerial vehicles (UAVs). The method simultaneously tracks both position and attitude using a ground-based vision system with an arbitrary number of cameras (N-cameras), even in Global Navigation Satellite System-denied environments. Our approach uses a pipeline in which a Convolutional Neural Network (CNN) detects UAV anchors, which in turn drive the estimation of the UAV pose. To ensure robust and precise anchor detection, we designed a Block-CNN architecture that mitigates the influence of outliers. Leveraging the information from these anchors, we established an Extended Kalman Filter to continuously update the UAV's position and attitude. To support our research, we set up both monocular and stereo outdoor ground-view systems for data collection and experimentation. Additionally, to expand our training dataset without requiring extra outdoor experiments, we created a parallel system that combines outdoor and simulated setups with identical configurations. We conducted a series of simulated and outdoor experiments. The results show that, compared with the baselines, our method improves anchor detection precision by 3.0% and position and attitude estimation accuracy by 19.5% and 12.7%, respectively. Furthermore, these experiments affirm the practicality of the proposed architecture and algorithm, meeting the stringent accuracy and real-time requirements of auto-landing fixed-wing UAVs.
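The abstract's pipeline feeds anchor detections into an Extended Kalman Filter that continuously updates the UAV state. As an illustration only (the paper's actual state vector, models, and noise parameters are not given here), the predict/update cycle for the position part of such a tracker can be sketched as follows, assuming a constant-velocity motion model and a noisy 3D position measurement derived from the detected anchors; all matrices and noise levels below are hypothetical:

```python
import numpy as np

# Illustrative Kalman-style tracker: state [x, y, z, vx, vy, vz],
# constant-velocity motion model, position-only measurements.
# Not the paper's filter; an assumed minimal sketch.

def make_filter(dt=0.1):
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
    Q = 0.01 * np.eye(6)                          # process noise (assumed)
    R = 0.25 * np.eye(3)                          # measurement noise (assumed)
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the anchor-derived position measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    F, H, Q, R = make_filter()
    x, P = np.zeros(6), np.eye(6)
    rng = np.random.default_rng(0)
    truth = np.array([100.0, 0.0, 50.0])          # static target for the sketch
    for _ in range(50):
        z = truth + rng.normal(0.0, 0.5, 3)       # noisy "anchor" measurement
        x, P = kf_step(x, P, z, F, H, Q, R)
    print(np.round(x[:3], 1))
```

In the paper's setting the measurement function would instead map the 6D pose to anchor image coordinates across N cameras, making the update step nonlinear and requiring the EKF's Jacobian-based linearization; the predict/update structure is the same.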
Pages: 18