A Fast Spatial Clustering Method for Sparse LiDAR Point Clouds Using GPU Programming

Cited by: 7
Authors
Tian, Yifei [1 ,2 ]
Song, Wei [1 ,3 ]
Chen, Long [2 ]
Sung, Yunsick [4 ]
Kwak, Jeonghoon [4 ]
Sun, Su [5 ]
Affiliations
[1] North China Univ Technol, Sch Informat Sci & Technol, Beijing 100144, Peoples R China
[2] Univ Macau, Dept Comp & Informat Sci, Macau 999078, Peoples R China
[3] Beijing Key Lab Urban Intelligent Traff Control T, Beijing 100144, Peoples R China
[4] Dongguk Univ, Dept Multimedia Engn, Seoul 04620, South Korea
[5] Purdue Univ, Dept Comp & Informat Technol, W Lafayette, IN 47907 USA
Keywords
3D spatial clustering; connected component labeling; LiDAR; GPU programming; 3D OBJECT RECOGNITION; SEGMENTATION; VEHICLES; CLASSIFICATION; EXTRACTION;
DOI
10.3390/s20082309
CLC Classification
O65 [Analytical Chemistry];
Discipline Codes
070302 ; 081704 ;
Abstract
Fast and accurate obstacle detection is essential for the perception of a mobile vehicle's environment. Because point clouds sensed by light detection and ranging (LiDAR) sensors are sparse and unstructured, traditional obstacle clustering on raw point clouds is inaccurate and time-consuming. Thus, to achieve fast obstacle clustering in unknown terrain, this paper proposes an elevation-reference connected component labeling (ER-CCL) algorithm using graphics processing unit (GPU) programming. LiDAR points are first projected onto a rasterized x-z plane so that sparse points are mapped into a series of regularly arranged small cells. Based on the height distribution of the LiDAR points, the ground cells are filtered out and a flag map is generated. Next, the ER-CCL algorithm is applied to the label map generated from the flag map to mark individual clusters with unique labels. Finally, obstacle labeling results are inverse-transformed from the x-z plane to 3D points to provide the clustering results. For real-time 3D point cloud clustering, ER-CCL is accelerated by running it in parallel with GPU programming technology.
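The abstract's pipeline (rasterize onto the x-z plane, filter ground cells by height distribution, label connected components, map labels back to 3D points) can be sketched on the CPU as follows. This is a minimal illustrative sketch, not the paper's implementation: `scipy.ndimage.label` stands in for the GPU ER-CCL kernel, and the function name, cell size, and height threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import label

def cluster_obstacles(points, cell_size=0.2, height_thresh=0.3):
    """Cluster sparse 3D points (x, y, z; y is elevation) via x-z rasterization.

    CPU sketch of the described pipeline: rasterize, flag non-ground cells
    by their height span, run connected-component labeling on the flag map,
    then inverse-transform cell labels back to the 3D points.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Map each point to a small cell on the rasterized x-z plane.
    ix = ((x - x.min()) / cell_size).astype(int)
    iz = ((z - z.min()) / cell_size).astype(int)
    grid_shape = (ix.max() + 1, iz.max() + 1)

    # Per-cell min/max elevation approximates the height distribution.
    hmin = np.full(grid_shape, np.inf)
    hmax = np.full(grid_shape, -np.inf)
    np.minimum.at(hmin, (ix, iz), y)
    np.maximum.at(hmax, (ix, iz), y)

    # Flag map: cells with a large height span are obstacle cells;
    # flat cells are treated as ground and filtered out.
    flag = (hmax - hmin) > height_thresh

    # Connected-component labeling on the flag map
    # (stand-in for the GPU ER-CCL step).
    labels, n_clusters = label(flag)

    # Inverse transform: each 3D point inherits its cell's cluster label
    # (0 means ground/background).
    return labels[ix, iz], n_clusters
```

In the paper the labeling step itself runs in parallel on the GPU, which is where the speedup over a serial CCL pass comes from; the sketch above only mirrors the data flow.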
Pages: 20
Cited References
33 records
[1]  
[Anonymous], P ISPRS ANN PHOTOGRA
[2]  
[Anonymous], 2014, Adv. Robotics, DOI DOI 10.1109/IRANIANMVIP.2010.5941134
[3]  
[Anonymous], 2018, PATTERN RECOGN LETT, DOI DOI 10.1016/j.patrec.2017.09.038
[4]   Motion-based object segmentation using hysteresis and bidirectional inter-frame change detection in sequences with moving camera [J].
Arvanitidou, Marina Georgia ;
Tok, Michael ;
Glantz, Alexander ;
Krutz, Andreas ;
Sikora, Thomas .
SIGNAL PROCESSING-IMAGE COMMUNICATION, 2013, 28 (10) :1420-1434
[5]   3D Lidar-based static and moving obstacle detection in driving environments: An approach based on voxels and multi-region ground planes [J].
Asvadi, Alireza ;
Premebida, Cristiano ;
Peixoto, Paulo ;
Nunes, Urbano .
ROBOTICS AND AUTONOMOUS SYSTEMS, 2016, 83 :299-311
[6]   Threshold-free object and ground point separation in LIDAR data [J].
Bartels, Marc ;
Wei, Hong .
PATTERN RECOGNITION LETTERS, 2010, 31 (10) :1089-1099
[7]   SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks [J].
Boulch, Alexandre ;
Guerry, Joris ;
Le Saux, Bertrand ;
Audebert, Nicolas .
COMPUTERS & GRAPHICS-UK, 2018, 71 :189-198
[8]   University of Michigan North Campus long-term vision and lidar dataset [J].
Carlevaris-Bianco, Nicholas ;
Ushani, Arash K. ;
Eustice, Ryan M. .
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2016, 35 (09) :1023-1035
[9]   Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud [J].
Cho, Seoungjae ;
Kim, Jonghyun ;
Ikram, Warda ;
Cho, Kyungeun ;
Jeong, Young-Sik ;
Um, Kyhyun ;
Sim, Sungdae .
SCIENTIFIC WORLD JOURNAL, 2014,
[10]   Convergent application for trace elimination of dynamic objects from accumulated lidar point clouds [J].
Chu, Phuong Minh ;
Cho, Seoungjae ;
Sim, Sungdae ;
Kwak, Kiho ;
Cho, Kyungeun .
MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (22) :29991-30009