A Visual Navigation System for UAV under Diverse Illumination Conditions

Times Cited: 18
Authors
Hai, Jiang [1 ]
Hao, Yutong [1 ]
Zou, Fengzhu [1 ]
Lin, Fang [1 ]
Han, Songchen [1 ]
Affiliations
[1] Sichuan Univ, Sch Aeronaut & Astronaut, Chengdu 610065, Peoples R China
Keywords
HISTOGRAM EQUALIZATION; IMAGE; ENHANCEMENT; RETINEX
DOI
10.1080/08839514.2021.1985799
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The high-precision navigation and positioning capability of a UAV (Unmanned Aerial Vehicle) is a key indicator of its degree of autonomy. Visual navigation based on image matching has become an important research direction for autonomous UAV navigation because of its low cost, strong anti-jamming ability, and good localization accuracy. However, the visual quality of images captured by a UAV is seriously degraded by factors such as weak illumination or limited sensor performance. Correcting these degradations in low-light images improves visual quality and, in turn, the performance of UAV visual navigation. In this paper, we propose a novel fully convolutional network based on Retinex theory to remove the degradations of low-light images captured by UAVs, which effectively improves both the visual quality of the images and the performance of visual matching. We also design a visual navigation system based on the proposed network. Extensive experiments demonstrate that our method outperforms existing methods by a large margin, both quantitatively and qualitatively, and effectively improves the performance of image matching algorithms. The resulting visual navigation system achieves UAV self-localization under different illumination conditions. Moreover, we show that our method is also effective in other practical tasks (e.g., autonomous driving).
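As background for the Retinex-based formulation mentioned above: Retinex theory models an observed image as the element-wise product of a reflectance component and an illumination component, and a typical Retinex-style enhancement pipeline (a general sketch, not necessarily the exact decomposition used by the proposed network) estimates both components and then brightens the illumination before recomposing the image:

    S(x) = R(x) \odot L(x), \qquad \hat{S}(x) = \hat{R}(x) \odot g\big(\hat{L}(x)\big)

Here S is the observed low-light image, R and L are the estimated reflectance and illumination maps, \odot denotes element-wise multiplication, and g(\cdot) is an illumination-adjustment function; the enhanced image \hat{S} is then fed to the image-matching stage of the navigation pipeline.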
Pages: 1529-1549
Number of Pages: 21