Sensor Fusion Method for Object Detection and Distance Estimation in Assisted Driving Applications

Cited: 4
Authors
Favelli, Stefano [1,2]
Xie, Meng [2]
Tonoli, Andrea [1,2]
Affiliations
[1] Politecnico di Torino, Center for Automotive Research and Sustainable Mobility (CARS@PoliTO), I-10129 Turin, Italy
[2] Politecnico di Torino, Department of Mechanical and Aerospace Engineering (DIMEAS), I-10129 Turin, Italy
Keywords
ADAS; environment perception; object detection; sensor fusion; camera; LiDAR; ROS
DOI
10.3390/s24247895
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
The real-time fusion of data from multiple sensors is a crucial process for autonomous and assisted driving, where high-level controllers require the classification of surrounding objects and the estimation of their relative positions. This paper presents an open-source framework to estimate the distance between a sensor-equipped vehicle and the road objects on its path by fusing data from cameras, radars, and LiDARs. The target application is an Advanced Driving Assistance System (ADAS) that benefits from the integration of the sensors' attributes to plan the vehicle's speed according to real-time road occupation and the distance from obstacles. A low-level sensor fusion approach based on geometrical projection is proposed to map 3D point clouds onto 2D camera images. The fused information is used to estimate the distance of the objects detected and labeled by a Yolov7 detector. The open-source pipeline, implemented in ROS, consists of a sensor calibration method, a Yolov7 detector, 3D point cloud downsampling and clustering, and finally a 3D-to-2D transformation between the reference frames. The goal of the pipeline is to perform data association and estimate the distance of the identified road objects. Accuracy and performance are evaluated in real-world urban scenarios with commercial hardware. The pipeline, running at 5 Hz on an embedded Nvidia Jetson AGX, achieves good accuracy in object identification and distance estimation. The proposed framework offers a flexible and resource-efficient method for data association from common automotive sensors and proves to be a promising solution for enabling effective environment perception in assisted driving.
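The 3D-to-2D projection and distance-estimation step described in the abstract can be sketched as follows. This is a minimal Python illustration, not the authors' implementation: the intrinsic matrix K, the 4x4 LiDAR-to-camera extrinsic transform T_cam_lidar, and the pixel-corner bounding-box format from the Yolov7 detector are all assumed for the sake of the example.

# Minimal sketch (assumed interfaces, not the paper's exact code) of projecting
# LiDAR points into the camera image and estimating the distance of a detected object.
import numpy as np

def project_lidar_to_image(points_lidar, K, T_cam_lidar):
    """Project Nx3 LiDAR points into pixel coordinates with a pinhole camera model."""
    # Homogeneous coordinates, then transform points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]
    # Perspective projection with the intrinsic matrix K, then normalize by depth.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam

def estimate_box_distance(uv, pts_cam, box):
    """Median range of the points whose projection falls inside a 2D detection box."""
    u_min, v_min, u_max, v_max = box  # assumed Yolov7 box format in pixels
    mask = (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) & \
           (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
    if not mask.any():
        return None
    return float(np.median(np.linalg.norm(pts_cam[mask], axis=1)))

Taking the median range of the points associated with a detection box is one simple way to reduce the influence of background points; the paper's actual clustering and association logic may differ.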
Pages: 21