Enhanced Object Detection in Autonomous Vehicles through LiDAR-Camera Sensor Fusion

Cited by: 3
Authors
Dai, Zhongmou [1 ,2 ]
Guan, Zhiwei [1 ,3 ]
Chen, Qiang [1 ]
Xu, Yi [4 ,5 ]
Sun, Fengyi [1 ]
Affiliations
[1] Tianjin Univ Technol & Educ, Sch Automobile & Transportat, Tianjin 300222, Peoples R China
[2] Shandong Transport Vocat Coll, Weifang 261206, Peoples R China
[3] Tianjin Sino German Univ Appl Sci, Sch Automobile & Rail Transportat, Tianjin 300350, Peoples R China
[4] Natl & Local Joint Engn Res Ctr Intelligent Vehicl, Tianjin 300222, Peoples R China
[5] QINGTE Grp Co Ltd, Qingdao 266106, Peoples R China
Source
WORLD ELECTRIC VEHICLE JOURNAL | 2024, Vol. 15, No. 07
Keywords
autonomous vehicles; object detection; object tracking; LiDAR-camera fusion; improved DeepSORT; EXTRINSIC CALIBRATION; TRACKING;
DOI
10.3390/wevj15070297
Chinese Library Classification
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline Codes
0808 ; 0809 ;
Abstract
To realize accurate environment perception, the technological key to enabling autonomous vehicles to interact with their external environments, it is first necessary to solve the problems of object detection and tracking while the vehicle is moving. Multi-sensor fusion has become essential for overcoming the shortcomings of individual sensor types and improving the efficiency and reliability of autonomous vehicles. This paper proposes moving object detection and tracking methods based on LiDAR-camera fusion. Building on the calibration of the camera and LiDAR, YOLO and PointPillars network models are used to perform object detection on image and point cloud data, respectively. Then, a target box intersection-over-union (IoU) matching strategy based on center-point distance probability, together with an improved Dempster-Shafer (D-S) theory, is used to fuse class confidences and obtain the final detection result. For moving object tracking, the DeepSORT algorithm is improved to address identity switching caused by dynamic objects re-emerging after occlusion. An unscented Kalman filter is used to accurately predict the motion state of nonlinear objects, and object motion information is added to the IoU matching module to improve matching accuracy during data association. Verification on self-collected data shows that fusion detection and tracking perform significantly better than a single sensor. The improved DeepSORT algorithm achieves 66% MOTA and 79% MOTP, which are 10% and 5% higher, respectively, than those of the original DeepSORT algorithm, effectively solving the tracking instability caused by the occlusion of moving objects.
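As a rough illustration of the class-confidence fusion step mentioned in the abstract, the sketch below applies the classical Dempster's rule of combination to two mass functions over singleton class hypotheses, e.g., one from the camera detector and one from the LiDAR detector. This is only the textbook baseline restricted to singletons; the paper's *improved* D-S theory modifies this rule in ways the abstract does not detail, and the class names used here are purely hypothetical.

```python
def dempster_combine(m1, m2):
    """Fuse two basic probability assignments (mass functions) over
    singleton hypotheses using Dempster's rule of combination.

    m1, m2: dicts mapping hypothesis -> mass, each summing to 1.
    Returns the normalized combined mass function.
    """
    keys = set(m1) | set(m2)
    # Conflict mass K: product mass assigned to incompatible hypotheses.
    conflict = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
                   for a in keys for b in keys if a != b)
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    norm = 1.0 - conflict  # renormalization factor (1 - K)
    return {k: m1.get(k, 0.0) * m2.get(k, 0.0) / norm for k in keys}


# Hypothetical example: camera and LiDAR class confidences for one box.
camera = {"car": 0.8, "pedestrian": 0.2}
lidar = {"car": 0.6, "pedestrian": 0.4}
fused = dempster_combine(camera, lidar)
# Agreement on "car" is reinforced after conflict is normalized away.
```

In this example the conflict mass is 0.8 × 0.4 + 0.2 × 0.6 = 0.44, so the fused "car" mass is 0.48 / 0.56 ≈ 0.857, higher than either sensor alone reported; this reinforcement of agreeing evidence is the usual motivation for D-S fusion of detector confidences.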
Pages: 24
Related Papers
50 records in total
  • [31] FAFNs: Frequency-Aware LiDAR-Camera Fusion Networks for 3-D Object Detection
    Wang, Jingxuan
    Lu, Yuanyao
    Jiang, Haiyang
    IEEE SENSORS JOURNAL, 2023, 23 (24) : 30847 - 30857
  • [32] StrongFusionMOT: A Multi-Object Tracking Method Based on LiDAR-Camera Fusion
    Wang, Xiyang
    Fu, Chunyun
    He, Jiawei
    Wang, Sujuan
    Wang, Jianwen
    IEEE SENSORS JOURNAL, 2023, 23 (11) : 11241 - 11252
  • [33] Fusion of an RGB camera and LiDAR sensor through a Graph CNN for 3D object detection
    Choi, Jinsol
    Shin, Minwoo
    Paik, Joonki
    OPTICS CONTINUUM, 2023, 2 (05): : 1166 - 1179
  • [34] EVP-LCO: LiDAR-Camera Odometry Enhancing Vehicle Positioning for Autonomous Vehicles
    Xun, Yijie
    Dong, Hao
    Ma, Xiaochuan
    Mao, Bomin
    Guo, Hongzhi
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (07): : 9252 - 9263
  • [35] Obstacle Detection for Autonomous Driving Vehicles With Multi-LiDAR Sensor Fusion
    Cao, Mingcong
    Wang, Junmin
    JOURNAL OF DYNAMIC SYSTEMS MEASUREMENT AND CONTROL-TRANSACTIONS OF THE ASME, 2020, 142 (02):
  • [36] A Combined LiDAR-Camera Localization for Autonomous Race Cars
    Sauerbeck, Florian
    Baierlein, Lucas
    Betz, Johannes
    Lienkamp, Markus
    SAE INTERNATIONAL JOURNAL OF CONNECTED AND AUTOMATED VEHICLES, 2022, 5 (01):
  • [37] Object detection using depth completion and camera-LiDAR fusion for autonomous driving
    Carranza-Garcia, Manuel
    Javier Galan-Sales, F.
    Maria Luna-Romera, Jose
    Riquelme, Jose C.
    INTEGRATED COMPUTER-AIDED ENGINEERING, 2022, 29 (03) : 241 - 258
  • [38] MSGFusion: Muti-scale Semantic Guided LiDAR-Camera Fusion for 3D Object Detection
    Zhu, Huming
    Xue, Yiyu
    Cheng, Xinyue
    Hou, Biao
    2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024,
  • [39] PLC-Fusion: Perspective-Based Hierarchical and Deep LiDAR Camera Fusion for 3D Object Detection in Autonomous Vehicles
    Mushtaq, Husnain
    Deng, Xiaoheng
    Azhar, Fizza
    Ali, Mubashir
    Sherazi, Hafiz Husnain Raza
    INFORMATION, 2024, 15 (11)
  • [40] LiDAR-camera fusion: dual-scale correction for vehicle multi-object detection and trajectory extraction
    Fu, Ting
    Xie, Shuke
    Hu, Weichao
    Wang, Junhua
    Cui, Zixuan
    JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2024,