Autonomous Multirotor UAV Search and Landing on Safe Spots Based on Combined Semantic and Depth Information From an Onboard Camera and LiDAR

Cited by: 4
Authors
Lim, Jeonggeun [1]
Kim, Myeonggyun [1]
Yoo, Hyungwook [1]
Lee, Jongho [1]
Affiliation
[1] Gwangju Inst Sci & Technol, Sch Mech Engn, Gwangju 61005, South Korea
Keywords
Cameras; Laser radar; Vectors; Point cloud compression; Semantics; Hardware; Mechatronics; Autonomous landing; autonomous vehicle; depth map; landing spot; multirotor; semantic segmentation; slope extraction; SYSTEM;
DOI
10.1109/TMECH.2024.3369028
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Autonomous or manually piloted aerial vehicles should be able to land safely after completing their missions. While human pilots can identify safe landing spots for manned or remote-controlled aerial vehicles, unmanned aerial vehicles (UAVs) must autonomously evaluate their surrounding environments to land safely. In this article, we present fully autonomous strategies for searching for safe landable spots and landing on them. The approach combines camera images with light detection and ranging (LiDAR) data. Class-wise complementary criteria determine safe landable regions based on slope extraction from the LiDAR point cloud and deep-learning semantic segmentation of the camera images. All required components, including algorithms, heterogeneous sensors, and processors, were implemented on a multirotor UAV for standalone operation. Real-time outdoor experiments demonstrated fully autonomous search for and landing on safe spots in various environments containing water, grass, trees, and shadows.
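The core decision rule described in the abstract pairs semantic class labels from the camera with terrain slope extracted from the LiDAR point cloud. The following is a minimal sketch of such a class-wise complementary landability check, not the authors' implementation; the class names, slope thresholds, and least-squares plane fit used for slope estimation are illustrative assumptions.

# A minimal sketch (not the authors' implementation) of a class-wise
# complementary landability check: a cell is considered safe only if its
# semantic class is acceptable AND the local terrain slope (from LiDAR)
# is below a class-specific threshold. Class names, thresholds, and the
# plane-fitting helper are illustrative assumptions.
import numpy as np

LANDABLE_CLASSES = {"paved-area", "grass", "dirt", "gravel"}   # assumed labels
SLOPE_LIMIT_DEG = {"paved-area": 10.0, "grass": 8.0, "dirt": 8.0, "gravel": 6.0}

def local_slope_deg(points: np.ndarray) -> float:
    """Estimate terrain slope (deg) by least-squares plane fit z = ax + by + c."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeffs
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    # Angle between the fitted plane's normal and the vertical axis.
    return np.degrees(np.arccos(abs(normal[2])))

def is_landable(semantic_label: str, cell_points: np.ndarray) -> bool:
    """Combine semantic class and LiDAR-derived slope to flag a safe cell."""
    if semantic_label not in LANDABLE_CLASSES or len(cell_points) < 3:
        return False
    return local_slope_deg(cell_points) <= SLOPE_LIMIT_DEG[semantic_label]

# Example: a nearly flat grid of LiDAR points labeled "grass" is accepted.
pts = np.array([[x, y, 0.02 * x] for x in range(5) for y in range(5)], float)
print(is_landable("grass", pts))  # True (slope of about 1.1 degrees)

In practice, both cues are complementary: segmentation alone can mark visually flat but sloped ground as safe, while slope alone can accept flat water surfaces, so requiring both criteria to pass is the safer combination.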
Pages: 3960-3970
Page count: 11