Unifying Obstacle Detection, Recognition, and Fusion Based on the Polarization Color Stereo Camera and LiDAR for the ADAS

Cited: 13
Authors
Long, Ningbo [1]
Yan, Han [2]
Wang, Liqiang [1,3]
Li, Haifeng [1,3]
Yang, Qing [1,3]
Affiliations
[1] Zhejiang Lab, Res Ctr Humanoid Sensing, Hangzhou 311100, Peoples R China
[2] Beijing Inst Control Engn, Sci & Technol Space Intelligent Control Lab, Beijing 100094, Peoples R China
[3] Zhejiang Univ, Coll Opt Sci & Engn, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
polarization-color-depth; stereo camera; non-repetitive scanning LiDAR; sensor fusion; object detection
DOI
10.3390/s22072453
CLC Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
The perception module plays an important role in vehicles equipped with advanced driver-assistance systems (ADAS). This paper presents a multi-sensor data fusion system based on a polarization color stereo camera and a forward-looking light detection and ranging (LiDAR) sensor, which achieves multiple-target detection, recognition, and data fusion. The You Only Look Once v4 (YOLOv4) network performs object detection and recognition on the color images. Depth images are computed from the rectified left and right images under the epipolar constraint, and obstacles are then detected in the depth images using the MeanShift algorithm. Pixel-level polarization images are extracted from the raw polarization-grey images, enabling the detection of water hazards. The PointPillars network is employed to detect objects in the point cloud. Calibration and synchronization between the sensors are accomplished. The experimental results show that the data fusion enriches the detection results, provides high-dimensional perceptual information, and extends the effective detection range. Moreover, the detection results remain stable under diverse range and illumination conditions.
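
To make the depth step concrete, the sketch below (Python, assuming OpenCV) shows one standard way to compute dense depth from a rectified stereo pair: rectification makes the epipolar lines horizontal, so correspondence reduces to a 1-D disparity search, and depth follows from the pinhole relation Z = fB/d. The semi-global matcher, its parameters, and the focal-length/baseline values are illustrative stand-ins, not details taken from the paper.

    import cv2
    import numpy as np

    def depth_from_rectified_pair(left_rect, right_rect, focal_px, baseline_m):
        # Semi-global block matching on a rectified pair; with horizontal
        # epipolar lines the correspondence search is a 1-D scan per row.
        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,   # disparity search range, must be divisible by 16
            blockSize=5,
            P1=8 * 3 * 5 ** 2,    # smoothness penalties (OpenCV's usual heuristics)
            P2=32 * 3 * 5 ** 2,
            uniquenessRatio=10,
        )
        # compute() returns fixed-point disparities scaled by 16.
        disparity = matcher.compute(left_rect, right_rect).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan   # mask pixels with no valid match
        # Pinhole triangulation: Z = f * B / d.
        return focal_px * baseline_m / disparity

    # Example call; focal length and baseline are hypothetical calibration outputs:
    # depth_m = depth_from_rectified_pair(left, right, focal_px=1400.0, baseline_m=0.12)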
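
The MeanShift-based obstacle step can likewise be sketched: sample the valid depth pixels, embed them in a joint image-coordinate/depth feature space, and treat each MeanShift cluster as an obstacle candidate. The scikit-learn clusterer, the depth scaling factor, and the bandwidth below are assumptions for illustration, not the paper's values.

    import numpy as np
    from sklearn.cluster import MeanShift

    def cluster_obstacles(depth_m, stride=8, max_range_m=30.0):
        # Subsample the depth image on a regular grid to keep clustering cheap.
        v, u = np.mgrid[0:depth_m.shape[0]:stride, 0:depth_m.shape[1]:stride]
        d = depth_m[::stride, ::stride]
        valid = np.isfinite(d) & (d < max_range_m)
        # Features: pixel coordinates plus scaled depth, so that nearby pixels
        # at similar range group together (50.0 is an illustrative scale).
        feats = np.stack([u[valid], v[valid], 50.0 * d[valid]], axis=1)
        ms = MeanShift(bandwidth=60.0, bin_seeding=True)
        labels = ms.fit_predict(feats)
        # One label per sampled pixel; cluster centers in ms.cluster_centers_
        # can serve as obstacle candidates.
        return feats, labels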
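
Finally, a minimal sketch of the pixel-level polarization extraction, assuming a four-directional micro-polarizer sensor (0/45/90/135 degrees): the four channels yield the linear Stokes parameters, from which the degree and angle of linear polarization (DoLP/AoLP) follow, and water surfaces typically stand out through their high DoLP. The 2x2 mosaic layout and the threshold are assumed for illustration, not taken from the paper.

    import numpy as np

    def polarization_from_raw(raw):
        # Split the four channels of a 2x2 micro-polarizer mosaic. The layout
        # (90/45 on even rows, 135/0 on odd rows) is an assumption; check the
        # sensor datasheet for the actual arrangement.
        i90  = raw[0::2, 0::2].astype(np.float64)
        i45  = raw[0::2, 1::2].astype(np.float64)
        i135 = raw[1::2, 0::2].astype(np.float64)
        i0   = raw[1::2, 1::2].astype(np.float64)

        # First three Stokes parameters of linear polarization.
        s0 = (i0 + i45 + i90 + i135) / 2.0   # total intensity
        s1 = i0 - i90
        s2 = i45 - i135

        # Degree and angle of linear polarization per pixel.
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-6)
        aolp = 0.5 * np.arctan2(s2, s1)
        return dolp, aolp

    # Usage: water hazards show up as regions of unusually high DoLP.
    # dolp, aolp = polarization_from_raw(raw)
    # water_mask = dolp > 0.4   # illustrative threshold, not from the paper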
Pages: 20
References
38 in total
[1] Aeberhard, M.; Schlichthaerle, S.; Kaempchen, N.; Bertram, T. Track-to-Track Fusion With Asynchronous Sensors Using Information Matrix Fusion for Surround Environment Perception. IEEE Transactions on Intelligent Transportation Systems, 2012, 13(4): 1717-1726.
[2] Agarwal, S.; Mierle, K.; et al. Ceres Solver. Available online: http://ceres-solver.org
[3] Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; Jesus, L.; Berriel, R.; Paixao, T.M.; Mutz, F.; Veronese, L.d.P.; Oliveira-Santos, T.; De Souza, A.F. Self-driving cars: A survey. Expert Systems with Applications, 2021, 165.
[4] Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
[5] Caltagirone, L.; Bellone, M.; Svensson, L.; Wahde, M. LIDAR-camera fusion for road detection using fully convolutional neural networks. Robotics and Autonomous Systems, 2019, 111: 125-131.
[6] Gong, Z.; Lin, H.; Zhang, D.; Luo, Z.; Zelek, J.; Chen, Y.; Nurunnabi, A.; Wang, C.; Li, J. A Frustum-based probabilistic framework for 3D object detection by fusion of LiDAR and camera data. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 159: 90-100.
[7] Gu, S.; et al. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2019: 3832. DOI: 10.1109/ICRA.2019.8793585.
[8] Ku, J.; et al. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 5750. DOI: 10.1109/IROS.2018.8594049.
[9] Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast Encoders for Object Detection from Point Clouds. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 12689-12697.
[10] LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature, 2015, 521(7553): 436-444.