Velocity Estimation from LiDAR Sensors Motion Distortion Effect

Cited: 7
Authors
Haas, Lukas [1,2]
Haider, Arsalan [1,2]
Kastner, Ludwig [1]
Zeh, Thomas [1]
Poguntke, Tim [1]
Kuba, Matthias [3]
Schardt, Michael [4]
Jakobi, Martin [2]
Koch, Alexander W. [3]
Affiliations
[1] Kempten Univ Appl Sci, IFM Inst Driver Assistance Syst & Connected Mobil, Junkerstrasse 1A, D-87734 Benningen, Germany
[2] Tech Univ Munich, Inst Measurement Syst & Sensor Technol, Theresienstr 90, D-80333 Munich, Germany
[3] Kempten Univ Appl Sci, Fac Elect Engn, Bahnhofstr 61, D-87435 Kempten, Germany
[4] Blickfeld GmbH, Barthstr 12, D-80339 Munich, Germany
Keywords
LiDAR sensor; deep learning; motion distortion effect; point cloud; advanced driver assistance systems; highly automated driving; velocity estimation;
DOI
10.3390/s23239426
CLC Classification Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, where a collimated laser beam illuminates objects sequentially, point by point, to capture 3D range data. In current systems, the point clouds from the LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, object tracking or sensor data fusion is required. Scanning LiDAR sensors exhibit the motion distortion effect, which occurs when objects move relative to the sensor. This effect is often filtered out by means of sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method that uses an artificial neural network to estimate an object's velocity and direction of motion in the sensor's field of view (FoV) from the motion distortion effect alone, without any sensor data fusion. The network was trained and evaluated on a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, the velocity and direction of an OoI that moves independently of the sensor can be estimated from a single point cloud captured by a single sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m s⁻¹ and a two-sigma confidence interval of [-0.0008 m s⁻¹, 0.0017 m s⁻¹] for the axis-wise estimation of an object's relative velocity, and an RMSE of 0.0815 m s⁻¹ and a two-sigma confidence interval of [0.0138 m s⁻¹, 0.0170 m s⁻¹] for the estimation of the resultant velocity. The extracted velocity information (4D-LiDAR) is available for motion prediction and object tracking and, by adding redundancy to sensor data fusion, can lead to more reliable velocity data.
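To make the underlying effect concrete, the minimal Python sketch below (not the authors' implementation; the simple sweep model and all parameter values are illustrative assumptions) simulates how sequential point-by-point scanning skews a moving object's point cloud, and shows that the skew linearly encodes the relative velocity when the undistorted geometry is known. In practice the undistorted geometry is unknown, which is why the paper trains a neural network on synthetic distorted point clouds instead of fitting the displacement directly.

```python
# Illustrative sketch of the motion distortion effect in a scanning LiDAR.
# Assumptions (not from the paper): a linear beam sweep over the object,
# constant object velocity, and known ground-truth geometry.
import numpy as np

SCAN_PERIOD = 0.1                     # s, assumed time to sweep the object
N_POINTS = 100                        # points placed on the object per frame
v_obj = np.array([5.0, 0.0, 0.0])     # m/s, object velocity relative to sensor

# Undistorted object surface at frame start: a vertical line of points
# at x = 20 m, spread along the scan direction y.
surface = np.column_stack([
    np.full(N_POINTS, 20.0),              # x (range direction)
    np.linspace(-1.0, 1.0, N_POINTS),     # y (scan direction)
    np.zeros(N_POINTS),                   # z
])

# Each point is measured at a different time as the beam sweeps the FoV,
# so the object has already moved by v * t when the beam hits it.
t = np.linspace(0.0, SCAN_PERIOD, N_POINTS)
distorted = surface + t[:, None] * v_obj

# With ground truth available, a least-squares fit of per-point displacement
# against per-point timestamp recovers the relative velocity exactly.
v_est = np.linalg.lstsq(t[:, None], distorted - surface, rcond=None)[0][0]
print(f"recovered velocity: {np.round(v_est, 3)} m/s")   # -> [5. 0. 0.]
```

The fit succeeds here only because the undistorted surface is known by construction; the paper's contribution is estimating the same velocity vector from the distorted point cloud alone.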
Pages: 16
相关论文
共 35 条
[1]  
[Anonymous], 2022, Blickfeld GmbH Cube 1 v2.1, Datasheet
[2]  
[Anonymous], LiPeZ-Entwicklung Neuartiger Verfahren der Objekterkennung und -Klassifizierung aus Punktwolkedaten von LiDAR Sensoren zur Erkennung und Zahlung von Personen in Menschenmengen
[3]  
Azim A, 2012, 2012 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), P802, DOI 10.1109/IVS.2012.6232303
[4]  
BALLARD P, 1994, IEEE INT CONF ROBOT, P2242, DOI 10.1109/ROBOT.1994.350952
[6]  
Blickfeld GmbH, Technologie
[7]   Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving [J].
Chiu, Hsu-kuang ;
Lie, Jie ;
Ambrus, Rares ;
Bohg, Jeannette .
2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, :14227-14233
[8]  
Eric Voigt E., 2020, LiDAR in Anwendung
[9]   3D-SiamRPN: An End-to-End Learning Method for Real-Time 3D Single Object Tracking Using Raw Point Cloud [J].
Fang, Zheng ;
Zhou, Sifan ;
Cui, Yubo ;
Scherer, Sebastian .
IEEE SENSORS JOURNAL, 2021, 21 (04) :4995-5011
[10]   Leveraging Shape Completion for 3D Siamese Tracking [J].
Giancola, Silvio ;
Zarzar, Jesus ;
Ghanem, Bernard .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :1359-1368