Enhanced Perception for Autonomous Driving Using Semantic and Geometric Data Fusion

Cited by: 15
Authors
Florea, Horatiu [1 ]
Petrovai, Andra [1 ]
Giosan, Ion [1 ]
Oniga, Florin [1 ]
Varga, Robert [1 ]
Nedevschi, Sergiu [1 ]
Affiliations
[1] Technical University of Cluj-Napoca, Image Processing and Pattern Recognition Research Center, Computer Science Department, Cluj-Napoca 400114, Romania
Keywords
autonomous driving; environment perception; low-level geometry and semantic fusion; semantic and instance segmentation; deep learning; 3D object detection; tracking
DOI
10.3390/s22135061
CLC number
O65 [Analytical Chemistry]
Discipline code
070302; 081704
Abstract
Environment perception remains one of the key tasks in autonomous driving, for which solutions have yet to reach maturity. Multi-modal approaches benefit from the complementary physical properties of each sensor technology used, boosting overall performance. The added complexity introduced by data fusion is not trivial to manage, and design decisions heavily influence the balance between the quality and the latency of the results. In this paper we present our novel real-time, 360-degree enhanced perception component based on low-level fusion between the geometry provided by LiDAR 3D point clouds and the semantic scene information obtained from multiple RGB cameras of several types. This multi-modal, multi-sensor scheme enables better range coverage and improved detection and classification quality with increased robustness. Semantic, instance, and panoptic segmentations of the 2D data are computed using efficient deep-learning-based algorithms, while the 3D point clouds are segmented using a fast, traditional voxel-based solution. Finally, fusion through point-to-image projection yields a semantically enhanced 3D point cloud that enables improved perception through 3D detection refinement and 3D object classification. The vehicle's planning and control systems receive the individual sensors' perception together with the enhanced one, as well as the semantically enhanced 3D points. The developed perception solutions have been successfully integrated into an autonomous vehicle software stack as part of the UP-Drive project.
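The point-to-image projection at the core of this fusion can be illustrated with a short sketch. The following is a minimal, hypothetical Python/NumPy example, not the authors' implementation: it assumes calibration inputs (camera intrinsics K, a LiDAR-to-camera rotation R and translation t), a LiDAR point cloud `points` of shape (N, 3), and a per-pixel label map `sem_map` produced by a 2D segmentation network, and attaches to each visible point the semantic class sampled at its projected pixel.

```python
import numpy as np

def semantically_enhance(points, sem_map, K, R, t, unlabeled=255):
    """Attach a semantic class ID to each 3D point; returns an (N, 4) array.

    All parameter names here are illustrative assumptions, not the paper's API:
    points  -- (N, 3) LiDAR points in the LiDAR frame
    sem_map -- (H, W) integer class map from a 2D segmentation network
    K       -- (3, 3) camera intrinsic matrix
    R, t    -- LiDAR-to-camera rotation (3, 3) and translation (3,)
    """
    H, W = sem_map.shape
    labels = np.full(len(points), unlabeled, dtype=np.int32)

    # Transform the points into the camera frame.
    cam = points @ R.T + t

    # Keep only points in front of the camera (positive depth).
    in_front = cam[:, 2] > 0.0

    # Perspective projection onto the image plane; for a standard K,
    # the third homogeneous coordinate equals the depth z.
    uvw = cam[in_front] @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(np.int64)
    v = (uvw[:, 1] / uvw[:, 2]).astype(np.int64)

    # Discard projections that fall outside the image bounds.
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Sample the 2D semantic map at each valid projected location.
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = sem_map[v[valid], u[valid]]

    # Return the semantically enhanced point cloud: x, y, z, class ID.
    return np.hstack([points, labels[:, None].astype(points.dtype)])
```

This sketch uses the simplest label-transfer choice, nearest-pixel sampling via coordinate truncation, and omits the occlusion handling, multi-camera merging, and timing alignment that a real 360-degree pipeline such as the one described in the abstract would require.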
Pages: 22