Fast Clustering for Cooperative Perception Based on LiDAR Adaptive Dynamic Grid Encoding

Cited by: 3
Authors
Kuang, Xinkai [1 ,2 ]
Zhu, Hui [1 ]
Yu, Biao [1 ]
Li, Bichun [1 ]
Affiliations
[1] Chinese Acad Sci, Hefei Inst Phys Sci, Hefei 230031, Peoples R China
[2] Univ Sci & Technol China, Sci Isl Branch, Hefei 230031, Peoples R China
Keywords
Multi-vehicle; Cooperative perception; LiDAR; Cluster; Obstacle detection;
DOI
10.1007/s12559-023-10211-x
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
This study introduces a strategy inspired by cooperative behavior in nature to enhance information sharing among autonomous vehicles (AVs), advancing intelligent transportation systems. When multiple light detection and ranging (LiDAR)-equipped vehicles cooperate, however, the volume of generated point cloud data can hinder real-time environment perception. This research assumes real-time, lossless data transmission and accurate, reliable pose-information sharing between cooperative vehicles. Drawing on human-inspired principles and computer imaging techniques, a method is proposed that encodes fused LiDAR point cloud data into dynamic grids whose cell sizes depend on inter-vehicle distances. Each grid cell corresponds to an image pixel: smaller cells are created for dense point clouds and larger cells for sparse ones, keeping the number of points per cell approximately equal. Additionally, a ground segmentation approach is developed that uses density and elevation differences between adjacent grid cells to retain significant obstacle points. A grid density-based adjacent clustering approach is then proposed that effectively groups connected grid cells containing obstacle points. Experiments using the Robot Operating System on a standard computer with public data show that the perception processing period for six cooperative vehicles is merely 43.217 ms, demonstrating the efficacy of the method in handling large volumes of LiDAR point cloud data. Comparative analysis with three alternative methods confirms the superior accuracy and recall of the proposed clustering approach, underscoring the robustness of this biologically inspired methodology for cooperative perception design and promoting efficient and safe vehicle navigation.
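The abstract's core idea can be sketched in a few lines: size grid cells adaptively, bin points into cells, then merge touching occupied cells into clusters. This is a minimal illustration only; the sizing rule, the parameter names (`base`, `scale`), and the 8-connectivity merge are assumptions standing in for the paper's actual equations and density/elevation criteria, which are not given in this record.

```python
import math
from collections import deque

def adaptive_cell_size(distance, base=0.2, scale=0.05):
    # Hypothetical sizing rule (not from the paper): cell edge length
    # grows with inter-vehicle distance, so sparser, farther point
    # clouds still yield roughly equal points per cell.
    return base + scale * distance

def grid_cluster(points, cell_size):
    """Cluster 2-D points by binning them into a grid and merging
    occupied cells that touch (8-connectivity) -- a simplified
    stand-in for the paper's density-based adjacent clustering."""
    cells = {}
    for i, (x, y) in enumerate(points):
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        cells.setdefault(key, []).append(i)

    labels = {}          # cell -> cluster id
    next_label = 0
    for start in cells:
        if start in labels:
            continue
        labels[start] = next_label
        queue = deque([start])
        while queue:     # flood-fill over adjacent occupied cells
            cx, cy = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in cells and nb not in labels:
                        labels[nb] = next_label
                        queue.append(nb)
        next_label += 1

    point_labels = [0] * len(points)
    for key, idxs in cells.items():
        for i in idxs:
            point_labels[i] = labels[key]
    return point_labels

# Two well-separated clumps fall into non-adjacent cells -> two clusters.
pts = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 5.1)]
lab = grid_cluster(pts, cell_size=0.5)
```

Because cluster membership is decided per cell rather than per point, the cost is linear in the number of points plus the number of occupied cells, which is what makes grid-based clustering attractive for fused multi-vehicle point clouds.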
Pages: 546-565 (20 pages)
References (43 total)
[1] Abdel-Qader Y. Sensors, 2021, 21: 2114.
[2] An Y., Shi J., Gu D., Liu Q. Visual-LiDAR SLAM Based on Unsupervised Multi-channel Deep Neural Networks. Cognitive Computation, 2022, 14(4): 1496-1508.
[3] Chen. IEEE Transactions on Vehicular Technology, 2021, 70: 8833.
[4] Chen Q. Journal of Advanced Transportation, 2022.
[5] Cui G., Zhang W., Xiao Y., Yao L., Fang Z. Cooperative Perception Technology of Autonomous Driving in the Internet of Vehicles Environment: A Review. Sensors, 2022, 22(15).
[6] Daniel K. IEEE Transactions on Intelligent Transportation Systems, 2020, 21.
[7] Duan X. T. China Communications, 2021, 18: 1. DOI: 10.23919/JCC.2021.07.001.
[8] Ester M. Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, 1996, 96: 226. DOI: 10.5555/3001460.3001507.
[9] Grubbs F. E. Procedures for Detecting Outlying Observations in Samples. Technometrics, 1969, 11(1): 1-21.
[10] Guo H. Remote Sensing Letters, 2022, 13: 382.