Moving Object Detection and Tracking Based on Interaction of Static Obstacle Map and Geometric Model-Free Approach for Urban Autonomous Driving

Cited by: 36
Authors
Lee, Hojoon [1 ]
Yoon, Jeongsik [1 ]
Jeong, Yonghwan [1 ]
Yi, Kyongsu [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Mech Engn, Seoul 08826, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Three-dimensional displays; Radar tracking; Laser radar; Real-time systems; Estimation; Target tracking; Autonomous vehicles; DATMO; sparse point cloud; model free tracking; LiDAR; VEHICLE DETECTION; VISION; LIDAR;
DOI
10.1109/TITS.2020.2981938
Chinese Library Classification
TU [Architectural Science];
Subject Classification Code
0813;
Abstract
Detection and tracking of moving objects (DATMO) in an urban environment using Light Detection and Ranging (LiDAR) is a major challenge for autonomous vehicles due to the sparse point cloud, multiple moving directions, diverse traffic participants, and computational load. To address this complexity, this study presents a novel model-free approach to DATMO using 2D LiDAR implemented on autonomous vehicles. The approach classifies moving points in the point cloud using a predicted Static Obstacle Map (SOM), which is generated through the interaction between the Geometric Model-Free Approach (GMFA) and the SOM, and estimates the state of each moving object via GMFA. The motion of each point, represented by the state of the moving objects, in turn updates the SOM. This interaction between GMFA and SOM estimates the correspondence between consecutive point clouds in real time. The proposed approach was evaluated with an RT-Range system and a labeled dataset. The accuracy of the estimated yaw angle and velocity of a moving vehicle was quantified using the RT-Range, showing significant improvement over geometric model-based tracking (MBT). The yaw-angle estimate, which strongly affects the inference of the target vehicle's cut-in/cut-out intention, is markedly improved. On the labeled dataset, false positives and false negatives are suppressed more effectively than with MBT.
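The abstract describes a loop in which a static obstacle map separates static from moving points, and the static points in turn reinforce the map. The sketch below illustrates that interaction in minimal form; the class name, grid parameters, and hit-count threshold are illustrative assumptions, not the authors' implementation, and the GMFA state estimation itself is omitted.

```python
import numpy as np

class StaticObstacleMap:
    """Minimal 2D occupancy grid standing in for the SOM.
    All names and thresholds here are illustrative assumptions."""

    def __init__(self, size=100, resolution=0.5, static_threshold=3):
        self.counts = np.zeros((size, size), dtype=int)  # hit count per cell
        self.resolution = resolution                     # metres per cell
        self.static_threshold = static_threshold         # hits before a cell counts as static

    def to_cell(self, point):
        # Map a 2D point in metres to grid indices, origin at the grid centre.
        idx = (np.asarray(point) / self.resolution).astype(int) + self.counts.shape[0] // 2
        return tuple(np.clip(idx, 0, self.counts.shape[0] - 1))

    def is_static(self, point):
        return self.counts[self.to_cell(point)] >= self.static_threshold

    def update(self, points):
        # Occupancy accumulates only where points persist across scans;
        # cells hit by moving points never reach the static threshold.
        for p in points:
            self.counts[self.to_cell(p)] += 1

def classify_scan(som, scan):
    """Split one LiDAR scan into static and moving points using the
    predicted map, then feed the scan back to update the map."""
    static = [p for p in scan if som.is_static(p)]
    moving = [p for p in scan if not som.is_static(p)]
    som.update(scan)
    return static, moving
```

In a full pipeline, the `moving` points would be passed to the model-free tracker for per-object state estimation, and the estimated object motion would further refine the map update.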
Pages: 3275-3284
Page count: 10
References
30 in total
[1] [Anonymous], 2014, Cooperative Systems.
[2] Borcs, Attila; Nagy, Balazs; Benedek, Csaba. Instant Object Detection in Lidar Point Clouds. IEEE Geoscience and Remote Sensing Letters, 2017, 14(7): 992-996.
[3] Bosse M, 2009, IEEE Int. Conf. Robot., p. 4244.
[4] Chen, Xiaozhi; Ma, Huimin; Wan, Ji; Li, Bo; Xia, Tian. Multi-View 3D Object Detection Network for Autonomous Driving. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 6526-6534.
[5] Cho H, 2014, IEEE Int. Conf. Robot., p. 1836, DOI 10.1109/ICRA.2014.6907100.
[6] Ferri F, 2015, IEEE Int. Conf. Intell. Robot., p. 5694, DOI 10.1109/IROS.2015.7354185.
[7] Gao, Hongbo; Cheng, Bo; Wang, Jianqiang; Li, Keqiang; Zhao, Jianhui; Li, Deyi. Object Classification Using CNN-Based Fusion of Vision and LIDAR in Autonomous Vehicle Environment. IEEE Transactions on Industrial Informatics, 2018, 14(9): 4224-4231.
[8] Giese T, 2017, Int. Radar Symp. Proc.
[9] Hess W, 2016, IEEE Int. Conf. Robot., p. 1271, DOI 10.1109/ICRA.2016.7487258.
[10] Hutchison M. C., 2010, Google Patents, Patent No. 7,821,422.