Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation

Cited by: 7
Authors
Tibebu, Haileleol [1 ]
De-Silva, Varuna [1 ]
Artaud, Corentin [1 ]
Pina, Rafael [1 ]
Shi, Xiyu [1 ]
Affiliation
[1] Loughborough Univ London, Inst Digital Technol, 3 Lesney Ave, London E20 3BS, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
glass detection; occupancy grid mapping; LiDAR noise reduction; localisation; POSE ESTIMATION; ODOMETRY; VISION; ROBUST;
DOI
10.3390/s22208021
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Recent deep learning frameworks have attracted strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have focused primarily on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset, named LboroAV2, using multiple sensors, including a camera, light detection and ranging (LiDAR), ultrasound, an e-compass and a rotary encoder. We also propose an end-to-end deep learning architecture that fuses RGB images and LiDAR laser scan data for odometry. The proposed method consists of a convolutional encoder, a compressed representation and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces the compressed representation, which is used to visualise the network's learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relationship between consecutive time steps. We use the Loughborough autonomous vehicle (LboroAV2) dataset and the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry (VO) dataset to conduct experiments and evaluate our results. In addition to visualising the network's learning process, our approach achieves superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible.
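The abstract describes an encoder, compressed representation and recurrent pipeline for camera/LiDAR odometry. The sketch below is a minimal, hypothetical PyTorch illustration of that kind of pipeline; all layer sizes, channel counts, module names and the 6-DoF pose head are assumptions made for illustration, not the authors' released implementation.

```python
# Hypothetical sketch (not the authors' code): a convolutional encoder compresses
# stacked camera/LiDAR inputs into a latent representation, an LSTM models the
# relationship between consecutive time steps, and a linear head regresses a
# 6-DoF relative pose per step. Channel counts and dimensions are assumed.
import torch
import torch.nn as nn

class FusionOdometryNet(nn.Module):
    def __init__(self, latent_dim=256, hidden_dim=512):
        super().__init__()
        # Convolutional encoder over fused RGB + LiDAR range channels (4 channels assumed)
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Compressed representation: usable for visualisation and as sequential input
        self.to_latent = nn.Linear(128 * 4 * 4, latent_dim)
        # Recurrent network over the sequence of compressed representations
        self.rnn = nn.LSTM(latent_dim, hidden_dim, num_layers=2, batch_first=True)
        # Pose head: 3 translation + 3 rotation parameters per time step
        self.pose_head = nn.Linear(hidden_dim, 6)

    def forward(self, x):
        # x: (batch, time, channels, height, width) sequence of fused frames
        b, t, c, h, w = x.shape
        feats = self.encoder(x.view(b * t, c, h, w)).flatten(1)
        latent = self.to_latent(feats).view(b, t, -1)
        out, _ = self.rnn(latent)
        return self.pose_head(out)  # (batch, time, 6) relative poses

# Example: a dummy 5-frame sequence of 64x256 fused camera/LiDAR inputs
poses = FusionOdometryNet()(torch.randn(2, 5, 4, 64, 256))
print(poses.shape)  # torch.Size([2, 5, 6])
```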
Pages: 17