LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles

Cited by: 64
Authors
Kumar, G. Ajay [1 ]
Lee, Jin Hee [1 ]
Hwang, Jongrak [1 ]
Park, Jaehyeong [1 ]
Youn, Sung Hoon [1 ]
Kwon, Soon [1 ]
Affiliation
[1] DGIST, Div Automot Technol, Daegu 42988, South Korea
Source
SYMMETRY-BASEL | 2020, Vol. 12, Issue 02
Funding
National Research Foundation of Singapore;
Keywords
computational geometry transformation; projection; sensor fusion; self-driving vehicle; sensor calibration; depth sensing; point cloud to image mapping; autonomous vehicle; EXTRINSIC CALIBRATION; 3D LIDAR; REGISTRATION; SYSTEM;
DOI
10.3390/sym12020324
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Code
07; 0710; 09;
Abstract
The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In the case of autonomous vehicles especially, the efficient fusion of data from these two types of sensors is important for enabling the estimation of object depth as well as the detection of objects at short and long distances. Because the two sensors capture different attributes of the environment simultaneously, integrating those attributes with an efficient fusion approach greatly benefits reliable and consistent perception of the environment. This paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometrical transformation and projection, low-level sensor fusion was performed between a camera and LiDAR using a 3D marker. The fusion information is then utilized to estimate the distance of objects detected by the RefineDet detector. Finally, the accuracy and performance of the sensor fusion and distance estimation approach were evaluated quantitatively and qualitatively in real-road and simulated environment scenarios. The proposed low-level sensor fusion, based on computational geometric transformation and projection for object distance estimation, thus proves to be a promising solution for enabling reliable and consistent environment perception in autonomous vehicles.
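The core geometric step the abstract describes, projecting LiDAR points into the camera image so that per-pixel depth can be attached to detected objects, can be sketched as a standard pinhole projection. This is a minimal illustration of the technique, not the paper's implementation: the intrinsics `K` and extrinsics `R`, `t` below are placeholder values, whereas the paper obtains its calibration from a 3D marker.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """Project Nx3 LiDAR points (sensor frame) into pixel coordinates.

    Returns (uv, depth): Mx2 pixel coordinates and the per-point
    camera-frame depth, with points behind the camera masked out.
    """
    # Rigid transform into the camera frame: p_cam = R @ p_lidar + t
    cam = points_xyz @ R.T + t
    depth = cam[:, 2]
    valid = depth > 0            # keep only points in front of the camera
    # Pinhole projection: normalize by depth, then apply intrinsics K
    proj = (cam[valid] / depth[valid, None]) @ K.T
    return proj[:, :2], depth[valid]

# Illustrative calibration: identity extrinsics, simple intrinsics
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0],   # directly ahead, 10 m away
                [1.0, 0.0, 10.0]])  # 1 m to the right, same range
uv, depth = project_lidar_to_image(pts, K, R, t)
# uv[0] lands at the principal point (320, 240); depth gives the
# distance estimate that would be assigned to a detection at that pixel.
```

Once points carry image coordinates, estimating an object's distance reduces to aggregating the depths of the projected points that fall inside the detector's bounding box.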
Pages: 23
Related Papers
(50 total)
  • [21] SELF-DRIVING VEHICLES IN URBAN ENVIRONMENTS
    Kochova, Katerina
    Martinez, Felipe
    12TH INTERNATIONAL DAYS OF STATISTICS AND ECONOMICS, 2018, : 864 - 873
  • [22] Self-Driving Vehicles, Autonomy and Justice
    Bracanovic, Tomislav
    NOVA PRISUTNOST, 2024, 22 (03): : 563 - 578
  • [23] A Merging Protocol for Self-Driving Vehicles
    Aoki, Shunsuke
    Rajkumar, Ragunathan
    2017 ACM/IEEE 8TH INTERNATIONAL CONFERENCE ON CYBER-PHYSICAL SYSTEMS (ICCPS), 2017, : 219 - 228
  • [24] Dynamic Intersections and Self-Driving Vehicles
    Aoki, Shunsuke
    Rajkumar, Ragunathan
    2018 9TH ACM/IEEE INTERNATIONAL CONFERENCE ON CYBER-PHYSICAL SYSTEMS (ICCPS 2018), 2018, : 320 - 330
  • [25] Promoting trust in self-driving vehicles
    Olaverri-Monreal, Cristina
    Nature Electronics, 2020, 3 : 292 - 294
  • [26] Real time object detection using LiDAR and camera fusion for autonomous driving
    Liu, Haibin
    Wu, Chao
    Wang, Huanjie
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [28] Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving
    Banerjee, Koyel
    Notz, Dominik
    Windelen, Johannes
    Gavarraju, Sumanth
    He, Mingkang
    2018 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2018, : 1632 - 1638
  • [29] DW-YOLO: An Efficient Object Detector for Drones and Self-driving Vehicles
    Chen, Yunfan
    Zheng, Wenqi
    Zhao, Yangyi
    Song, Tae Hun
    Shin, Hyunchul
    ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2023, 48 (02) : 1427 - 1436
  • [30] Textual Explanations for Self-Driving Vehicles
    Kim, Jinkyu
    Rohrbach, Anna
    Darrell, Trevor
    Canny, John
    Akata, Zeynep
    COMPUTER VISION - ECCV 2018, PT II, 2018, 11206 : 577 - 593