3D FEATURE POINT EXTRACTION FROM LIDAR DATA USING A NEURAL NETWORK

Cited by: 9
Authors
Feng, Y. [1 ]
Schlichting, A. [1 ]
Brenner, C. [1 ]
Affiliation
[1] Leibniz Univ Hannover, Inst Cartog & Geoinformat, Hannover, Germany
Source
XXIII ISPRS CONGRESS, COMMISSION I | 2016, Vol. 41, Part B1
Keywords
3D feature point extraction; Mobile Mapping System; LiDAR; Neural network
DOI
10.5194/isprsarchives-XLI-B1-563-2016
CLC Classification
TP7 [Remote Sensing Technology]
Discipline Codes
081102; 0816; 081602; 083002; 1404
Abstract
Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted from both reference data and online sensor data and then matched to improve the positioning accuracy of the vehicles. However, some environments contain only a limited number of poles. 3D feature points are a suitable alternative for use as landmarks: they can be assumed to be present in the environment independent of particular object classes. To match online LiDAR data to a LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train one using a neural network. In this approach, a set of candidate 3D feature points is first detected by the Shi-Tomasi corner detector on range images of the LiDAR point cloud. Trained with the backpropagation algorithm, the artificial neural network predicts feature points from these corner candidates. The training considers not only the shape of each corner candidate in the 2D range image, but also 3D features such as the curvature and the z component of the surface normal, which are computed directly from the LiDAR point cloud. The extracted feature points on the 2D range images are then retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our tests show that the proposed method provides a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
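A minimal sketch of the extraction pipeline the abstract describes, assuming the LiDAR scan has already been projected to a 2D range image with per-pixel curvature and surface-normal z-component maps. OpenCV's goodFeaturesToTrack (an implementation of the Shi-Tomasi detector) and scikit-learn's backpropagation-trained MLPClassifier stand in for the paper's own components; the patch size and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2
from sklearn.neural_network import MLPClassifier

PATCH = 7  # side length of the 2D patch around each candidate (assumed value)

def detect_candidates(range_img, max_corners=500):
    """Shi-Tomasi corner candidates on the range image."""
    img = cv2.normalize(range_img, None, 0, 1, cv2.NORM_MINMAX).astype(np.float32)
    corners = cv2.goodFeaturesToTrack(img, max_corners, qualityLevel=0.01, minDistance=5)
    if corners is None:
        return np.empty((0, 2), int)
    return corners.reshape(-1, 2).astype(int)  # (col, row) pixel coordinates

def candidate_features(range_img, curvature, normal_z, pts):
    """Per candidate: the 2D range patch plus curvature and normal z-component."""
    h = PATCH // 2
    feats = []
    for x, y in pts:
        patch = range_img[y - h:y + h + 1, x - h:x + h + 1]
        if patch.shape != (PATCH, PATCH):  # candidate too close to the image border
            patch = np.zeros((PATCH, PATCH), np.float32)
        feats.append(np.concatenate([patch.ravel(),
                                     [curvature[y, x], normal_z[y, x]]]))
    return np.asarray(feats)

# Training, with labels obtained e.g. from repeatability across scans (assumption):
#   clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)
# At run time, keep only the candidates the network accepts, then back-project
# the surviving 2D points into the 3D point cloud:
#   pts  = detect_candidates(range_img)
#   keep = clf.predict(candidate_features(range_img, curvature, normal_z, pts)) == 1
#   feature_points_2d = pts[keep]
```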
Pages: 563-569
Page count: 7