Image semantic segmentation-based navigation method for UAV auto-landing

Cited: 0
Authors
Shang K. [1 ,2 ]
Zheng X. [1 ]
Wang L. [2 ]
Hu G. [2 ]
Liu C. [2 ]
Affiliations
[1] School of Automation, Beijing Institute of Technology, Beijing
[2] Beijing Institute of Automation Equipment, Beijing
Source
Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology | 2020 / Vol. 28 / No. 05
Keywords
Image semantic segmentation; Pose estimation; Runway detection; Self-attention module;
DOI
10.13695/j.cnki.12-1222/o3.2020.05.004
Abstract
A UAV auto-landing navigation method based on deep convolutional neural network image semantic segmentation is proposed for UAV auto-landing in complex electromagnetic combat environments. First, a lightweight and efficient end-to-end runway detection network named RunwayNet is designed. In the feature extraction part, ShuffleNet V2 is modified with atrous (dilated) convolution to obtain a backbone network with adjustable output feature-map resolution, and a self-attention module is designed so that the network can extract global runway features. Second, a decoder module is designed that fuses the rich detail and spatial location information of the low-level layers with the coarse, abstract semantic information of the high-level layers to produce a fine runway segmentation output. Finally, an edge-line extraction and pose estimation algorithm based on the segmented runway area is proposed to compute the relative pose. Simulation and airborne experiment results show that precise segmentation and recognition of the runway area during landing can be achieved on an embedded real-time computing platform, with an operating distance of up to 3 km and a success rate close to 90%. The method solves the runway-identification blind-area and real-time problems in the landing process and significantly improves the robustness of UAV landing in complex environments. © 2020, Editorial Department of Journal of Chinese Inertial Technology. All rights reserved.
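The final step summarized in the abstract, extracting runway edge lines from the segmented region as input to pose estimation, can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a binary runway mask, takes the leftmost and rightmost runway pixel per image row, and fits each edge line by least squares; the function name `runway_edge_lines` and the toy trapezoidal mask are hypothetical.

```python
import numpy as np

def runway_edge_lines(mask):
    """Fit left/right runway edge lines from a binary segmentation mask.

    For each image row containing runway pixels, take the leftmost and
    rightmost runway columns, then least-squares fit x = a*y + b per side.
    Returns ((a_left, b_left), (a_right, b_right)).
    """
    rows = np.where(mask.any(axis=1))[0]
    # First and last runway pixel in each occupied row
    left = np.array([np.argmax(mask[r]) for r in rows], dtype=float)
    right = np.array([mask.shape[1] - 1 - np.argmax(mask[r][::-1]) for r in rows],
                     dtype=float)
    A = np.stack([rows, np.ones_like(rows)], axis=1).astype(float)
    (a_l, b_l), _, _, _ = np.linalg.lstsq(A, left, rcond=None)
    (a_r, b_r), _, _, _ = np.linalg.lstsq(A, right, rcond=None)
    return (a_l, b_l), (a_r, b_r)

# Toy trapezoidal "runway": edges converge toward the top of the image,
# as a straight runway appears under perspective projection.
mask = np.zeros((100, 100), dtype=bool)
for r in range(20, 100):
    half = r // 4                      # width grows toward the bottom
    mask[r, 50 - half:50 + half] = True

left_line, right_line = runway_edge_lines(mask)
```

The two fitted lines (here with slopes of roughly -0.25 and +0.25) intersect near the vanishing point of the runway edges; in the paper's pipeline such edge lines, together with the known runway geometry, feed the relative pose computation.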
Pages: 586-594
Page count: 8
References (21 in total)
[1]  
Williams K W., A summary of unmanned aircraft accident/incident data: Human factors implications, (2004)
[2]  
Barber B, McLain T, Edwards B., Vision-based landing of fixed-wing miniature air vehicles, Aerosp. Comp. Inf. Commun, 6, pp. 207-226, (2009)
[3]  
Szondy David, Winged drone nails first autonomous landing on a moving vehicle
[4]  
Gui Y, Guo P, Zhang H, et al., Airborne vision-based navigation method for UAV accuracy landing using infrared lamps, J. Intell. Robot. Syst, 72, pp. 197-218, (2013)
[5]  
Miller A, Shah M, Harper D., Landing a UAV on a runway using image registration, Proceedings of the IEEE International Conference on Robotics and Automation, pp. 182-187, (2008)
[6]  
Vezinet J, Escher A C, Guillet A, et al., State of the art of image-aided navigation techniques for aircraft approach and landing, Proceedings of the 2013 International Technical Meeting of The Institute of Navigation, pp. 473-607, (2013)
[7]  
Liu C, Liu L, Hu G, et al., A P3P problem solving algorithm for landing vision navigation, Navigation Positioning & Timing, 5, 1, pp. 58-61, (2018)
[8]  
Tonhauser A, Schwithal S, Wolkow M, et al., Integrity concept for image-based automated landing systems, Pacific PNT 2015, pp. 733-747, (2015)
[9]  
Chen L, Papandreou G, Kokkinos I, et al., DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs, IEEE Transactions on Pattern Analysis & Machine Intelligence, 40, 4, (2018)
[10]  
Ma N, Zhang X, Zheng H, et al., ShuffleNet V2: Practical guidelines for efficient CNN architecture design, Proceedings of the European Conference on Computer Vision (ECCV), pp. 116-131, (2018)