Fast Object Detection and Tracking in Laser Data for Autonomous Driving

Citations: 0
Authors
Ye Y. [1 ]
Li B. [1 ,2 ]
Fu L. [1 ]
Affiliations
[1] State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan
[2] Engineering Research Center for Spatio-Temporal Data Smart Acquisition and Application, Ministry of Education, Wuhan University, Wuhan
Source
Wuhan Daxue Xuebao (Xinxi Kexue Ban)/Geomatics and Information Science of Wuhan University | 2019 / Vol. 44 / No. 01
Funding
National Natural Science Foundation of China;
Keywords
Autonomous driving; Kalman filter; Moving object tracking; Point cloud; Urban environment;
DOI
10.13203/j.whugis20170146
Abstract
A fast algorithm for detecting and tracking multiple objects in multi-layer laser data for urban driving environments is proposed in this paper. Situational awareness is crucial for autonomous driving in complicated urban environments, yet challenging for 3D city perception, so object detection and tracking with cameras or lasers has become a popular research topic. Compared with cameras, multi-layer laser data is better suited to estimating an object's 3D model and predicting its motion, so 3D LiDAR is widely used in autonomous driving systems. The model-based object tracking framework used in this paper relies on a Kalman filter. We extract segments in each layer before clustering, which accelerates the detection step. Since under-segmentation and over-segmentation occur frequently when detecting objects in sparse laser data, we efficiently associate tracking history with the segmentation process. The proposed algorithm has been applied to the multi-layer laser mounted on our autonomous driving vehicle. Experiments demonstrate its applicability and efficiency in urban driving environments; on average, processing a single frame takes 58 ms. © 2019, Research and Development Office of Wuhan University. All rights reserved.
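The abstract describes a model-based tracker built on a Kalman filter. As a minimal sketch of that idea, the following assumes a constant-velocity motion model for an object centroid detected in the ground plane; the state layout, frame rate, and noise levels are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

dt = 0.1  # assumed laser frame interval (10 Hz scanner)

# State: [x, y, vx, vy]; measurement: detected cluster centroid [x, y].
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.1 * np.eye(2)                         # measurement noise (assumed)

def predict(x, P):
    """Propagate state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a detected centroid z."""
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Track an object moving at 1 m/s along x from noiseless detections.
x = np.array([0.0, 0.0, 0.0, 0.0])
P = np.eye(4)
for k in range(1, 21):
    x, P = predict(x, P)
    x, P = update(x, P, np.array([k * dt * 1.0, 0.0]))
print(x[2])  # estimated vx, which should approach 1.0 m/s
```

In a full pipeline of the kind the paper outlines, `predict` would supply a gate for associating per-layer segments with existing tracks, and `update` would run once a detection is matched.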
Pages: 139-144, 152