Efficient Online Segmentation for Sparse 3D Laser Scans

Cited by: 118
Authors
Bogoslavskyi, Igor [1 ]
Stachniss, Cyrill [1 ]
Affiliations
[1] Univ Bonn, Inst Geodesy & Geoinformat, Nussallee 15, D-53115 Bonn, Germany
Source
PFG-JOURNAL OF PHOTOGRAMMETRY REMOTE SENSING AND GEOINFORMATION SCIENCE | 2017 / Vol. 85 / No. 1
Keywords
Segmentation; 3D laser; Online; Range image; Sparse data; Point cloud;
DOI
10.1007/s41064-016-0003-y
Chinese Library Classification (CLC)
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102; 0816; 081602; 083002; 1404;
Abstract
The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception pipelines, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data, in a range image representation, into different objects. A key focus of our work is fast execution at several hundred Hertz. Our implementation has low computational demands, so it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables fast segmentation of each 3D scan. This approach furthermore handles sparse 3D data well, which is important for scanners such as the Velodyne VLP-16. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.
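The record contains no code, but the core idea of the range-image segmentation is easy to illustrate. In the paper, two neighboring range-image measurements with ranges d1 >= d2, separated by the sensor's angular step alpha, fall into the same segment when the angle beta = atan2(d2 * sin(alpha), d1 - d2 * cos(alpha)) exceeds a threshold theta. The C++ sketch below shows one way such a criterion could drive a flood fill over a range image; the data structure, the 4-neighborhood, and all names and thresholds are illustrative assumptions, not the authors' released implementation.

```cpp
// Illustrative sketch only: an angle-based flood fill over a range image,
// in the spirit of the method described in the abstract above.
#include <algorithm>
#include <cmath>
#include <queue>
#include <utility>
#include <vector>

struct RangeImage {
  int rows = 0, cols = 0;
  std::vector<float> depth;  // row-major ranges in meters; <= 0 means no return
  float d(int r, int c) const { return depth[r * cols + c]; }
};

// Angle between the beam to the farther of two neighboring points and the
// line connecting both measurements:
//   beta = atan2(d2 * sin(alpha), d1 - d2 * cos(alpha)), with d1 >= d2,
// where alpha is the angular step between the two beams.
float Beta(float da, float db, float alpha) {
  const float d1 = std::max(da, db);
  const float d2 = std::min(da, db);
  return std::atan2(d2 * std::sin(alpha), d1 - d2 * std::cos(alpha));
}

// Label connected components: neighboring pixels join the same segment
// whenever beta exceeds the threshold theta (large beta suggests the two
// returns lie on the same object rather than on a depth discontinuity).
std::vector<int> Segment(const RangeImage& img, float alpha_vert,
                         float alpha_horiz, float theta) {
  std::vector<int> label(img.rows * img.cols, 0);
  int next_label = 0;
  const int dr[4] = {-1, 1, 0, 0};  // vertical neighbors first
  const int dc[4] = {0, 0, -1, 1};  // then horizontal neighbors
  for (int r = 0; r < img.rows; ++r) {
    for (int c = 0; c < img.cols; ++c) {
      if (label[r * img.cols + c] != 0 || img.d(r, c) <= 0.f) continue;
      label[r * img.cols + c] = ++next_label;
      std::queue<std::pair<int, int>> bfs;
      bfs.push({r, c});
      while (!bfs.empty()) {
        auto [cr, cc] = bfs.front();
        bfs.pop();
        for (int k = 0; k < 4; ++k) {
          const int nr = cr + dr[k];
          const int nc = (cc + dc[k] + img.cols) % img.cols;  // 360-degree wrap
          if (nr < 0 || nr >= img.rows) continue;
          if (label[nr * img.cols + nc] != 0 || img.d(nr, nc) <= 0.f) continue;
          const float alpha = (k < 2) ? alpha_vert : alpha_horiz;
          if (Beta(img.d(cr, cc), img.d(nr, nc), alpha) > theta) {
            label[nr * img.cols + nc] = next_label;
            bfs.push({nr, nc});
          }
        }
      }
    }
  }
  return label;
}
```

Per the abstract, the ground is removed from the scan first; a flood fill of this kind would then only run over the remaining range-image pixels.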
Pages: 41-52
Page count: 12
References
21 entries in total
[1]  
Abdullah S., 2014, ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., Vol. XL-3, P1, DOI 10.5194/isprsarchives-XL-3-1-2014
[2]  
[Anonymous], 2008, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences
[3]  
Bansal M, 2009, IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), P31, DOI 10.1109/ICCVW.2009.5457720
[4]  
Behley J, 2013, IEEE INT C INT ROBOT, P4195, DOI 10.1109/IROS.2013.6696957
[5]  
Bogoslavskyi I, 2016, INT C INT ROB SYST
[6]  
Cabaret L, 2014, P C DES ARCH SIGN IM, P1, DOI 10.1109/SIPS.2014.6986069
[7]  
Choe Y, 2012, 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), P38, DOI 10.1109/URAI.2012.6462925
[8]  
Dewan A, 2016, IEEE INT CONF ROBOT, P4508, DOI 10.1109/ICRA.2016.7487649
[9]  
Douillard B, 2011, IEEE INT CONF ROBOT
[10]  
Douillard B., 2014, Adv. Robotics, P585