3D object detection for autonomous driving: Methods, models, sensors, data, and challenges

Cited by: 0
Authors
Ghasemieh A. [1 ]
Kashef R. [1 ]
Affiliations
[1] Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto
Source
Transportation Engineering | 2022, Vol. 8
Keywords
3D object detection; Autonomous vehicles; LiDAR; Point cloud; Sensors; Stereo images;
DOI
10.1016/j.treng.2022.100115
Abstract
Detecting the objects surrounding a vehicle is the most crucial step in autonomous driving. Failing to identify those objects correctly and in a timely manner can cause irreparable damage, affecting both safety and society. Numerous studies have addressed detecting these objects in two-dimensional (2D) and three-dimensional (3D) space. 2D object detection has achieved remarkable success; however, in recent years 3D object detection has seen rapidly growing adoption. 3D detection has several advantages over 2D methods, as it captures more accurate information about the environment. For example, 2D detection ignores the depth of the scene, which reduces detection accuracy. Despite considerable effort, 3D object detection has not yet reached maturity. Therefore, in this paper, we aim to provide a comprehensive overview of state-of-the-art 3D object detection methods, with a focus on 1) identifying advantages and limitations, 2) presenting a novel categorization of the literature, 3) outlining the various training procedures, 4) highlighting the research gaps in existing methods, and 5) building a road map for future directions. © 2022
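For illustration only (not part of the original record), the sketch below contrasts a 2D image-plane bounding box with a KITTI-style 3D box. The extra depth, physical size, and heading fields are exactly the information the abstract notes is missing from 2D detection; the class and field names are assumptions chosen for clarity.

```python
# Minimal sketch: 2D vs. 3D bounding-box parameterizations (KITTI-style 3D box).
from dataclasses import dataclass

@dataclass
class Box2D:
    """Axis-aligned box in image pixels: carries no depth information."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class Box3D:
    """Oriented box in metric 3D coordinates, as used in KITTI-style labels."""
    x: float       # box center, metres (lateral)
    y: float       # box center, metres (vertical)
    z: float       # box center, metres (depth along the optical axis)
    length: float  # object dimensions, metres
    width: float
    height: float
    yaw: float     # heading angle about the vertical axis, radians

# A 2D detector localizes a car only in pixels; the 3D box additionally gives
# its distance (z), physical size, and heading, which downstream planning needs.
car_2d = Box2D(410.0, 180.0, 585.0, 270.0)
car_3d = Box3D(x=1.8, y=1.6, z=22.4, length=4.2, width=1.8, height=1.5, yaw=0.03)
```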