Enhanced Object Detection in Autonomous Vehicles through LiDAR-Camera Sensor Fusion

Cited by: 3
Authors
Dai, Zhongmou [1,2]
Guan, Zhiwei [1,3]
Chen, Qiang [1]
Xu, Yi [4,5]
Sun, Fengyi [1]
Affiliations
[1] Tianjin Univ Technol & Educ, Sch Automobile & Transportat, Tianjin 300222, Peoples R China
[2] Shandong Transport Vocat Coll, Weifang 261206, Peoples R China
[3] Tianjin Sino German Univ Appl Sci, Sch Automobile & Rail Transportat, Tianjin 300350, Peoples R China
[4] Natl & Local Joint Engn Res Ctr Intelligent Vehicl, Tianjin 300222, Peoples R China
[5] QINGTE Grp Co Ltd, Qingdao 266106, Peoples R China
Source
WORLD ELECTRIC VEHICLE JOURNAL | 2024, Vol. 15, Issue 7
Keywords
autonomous vehicles; object detection; object tracking; LiDAR-camera fusion; improved DeepSORT; EXTRINSIC CALIBRATION; TRACKING;
DOI
10.3390/wevj15070297
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
To realize accurate environment perception, the technological key that enables autonomous vehicles to interact with their external environments, the problems of object detection and tracking during vehicle movement must first be solved. Multi-sensor fusion has become an essential means of overcoming the shortcomings of individual sensor types and of improving the efficiency and reliability of autonomous vehicles. This paper puts forward moving-object detection and tracking methods based on LiDAR-camera fusion. Based on the calibration of the camera and the LiDAR, the YOLO and PointPillars network models are used to detect objects in the image and point cloud data, respectively. A target-box intersection-over-union (IoU) matching strategy based on center-point distance probability, combined with improved Dempster-Shafer (D-S) theory, then fuses the class confidences to obtain the final detection result. For moving-object tracking, the DeepSORT algorithm is improved to address the identity switching that occurs when dynamic objects re-emerge after occlusion: an unscented Kalman filter is used to accurately predict the motion state of nonlinear objects, and object motion information is added to the IoU matching module to improve matching accuracy during data association. Verification on self-collected data shows that fusion-based detection and tracking perform significantly better than either sensor alone. The improved DeepSORT algorithm reaches 66% MOTA and 79% MOTP, which are 10% and 5% higher, respectively, than those of the original DeepSORT algorithm, and it effectively solves the tracking instability caused by the occlusion of moving objects.
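The detection-fusion step described in the abstract can be made concrete with a short sketch. The code below is not the authors' implementation: the Gaussian form of the center-point distance probability, the weight between IoU and that probability, the greedy one-to-one matching, and all parameter values are assumptions made for illustration; only Dempster's combination rule, restricted here to singleton class hypotheses, follows the standard definition.

```python
# Minimal sketch (not the authors' code) of LiDAR-camera detection fusion:
# camera detections and image-projected LiDAR detections are associated by a
# score combining box IoU with a center-point distance probability, and the
# class confidences of matched pairs are fused with Dempster's rule.
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def center_distance_prob(a, b, sigma=50.0):
    """Assumed Gaussian probability that two boxes belong to the same object,
    based on the pixel distance between their center points."""
    ca = np.array([(a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0])
    cb = np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])
    d2 = float(np.sum((ca - cb) ** 2))
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))

def dempster_fuse(m1, m2):
    """Dempster's rule for two mass vectors on singleton classes only:
    fused(i) = m1(i) * m2(i) / (1 - K), with K the conflicting mass."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = m1 * m2
    k = 1.0 - joint.sum()              # conflict between the two sources
    if k >= 1.0 - 1e-9:                # total conflict: fall back to averaging
        return (m1 + m2) / 2.0
    return joint / (1.0 - k)

def fuse_detections(cam_dets, lidar_dets, score_thr=0.5, w_iou=0.7):
    """Greedy association of camera and LiDAR detections.
    Each detection is (box, class_confidence_vector)."""
    fused, used = [], set()
    for cbox, cconf in cam_dets:
        best, best_score = None, score_thr
        for j, (lbox, lconf) in enumerate(lidar_dets):
            if j in used:
                continue
            score = w_iou * iou(cbox, lbox) + (1 - w_iou) * center_distance_prob(cbox, lbox)
            if score > best_score:
                best, best_score = j, score
        if best is None:
            fused.append((cbox, cconf))          # unmatched camera detection kept as-is
        else:
            used.add(best)
            lbox, lconf = lidar_dets[best]
            fused.append((cbox, dempster_fuse(cconf, lconf)))
    fused += [d for j, d in enumerate(lidar_dets) if j not in used]
    return fused
```

The tracking improvement replaces DeepSORT's linear Kalman prediction with an unscented Kalman filter so that nonlinear object motion is predicted more accurately, and the predicted motion is then used in the IoU matching during data association. Below is a hedged sketch using the filterpy library with an assumed constant-turn-rate-and-velocity (CTRV) state [x, y, v, yaw, yaw rate]; the paper's actual state vector, motion model, and noise settings are not specified in the abstract.

```python
# Sketch of UKF-based motion prediction for a single track (not the authors'
# code). Assumes a CTRV motion model and the filterpy library; the measurement
# is the object's center position taken from the fused detection.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    """CTRV transition: state = [px, py, v, yaw, yaw_rate] (assumed model)."""
    px, py, v, yaw, yr = x
    if abs(yr) > 1e-4:
        px += v / yr * (np.sin(yaw + yr * dt) - np.sin(yaw))
        py += v / yr * (-np.cos(yaw + yr * dt) + np.cos(yaw))
    else:
        px += v * np.cos(yaw) * dt
        py += v * np.sin(yaw) * dt
    return np.array([px, py, v, yaw + yr * dt, yr])

def hx(x):
    """Measurement model: only the object's center position is observed."""
    return x[:2]

def make_track_filter(cx, cy, dt=0.1):
    points = MerweScaledSigmaPoints(n=5, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=5, dim_z=2, dt=dt, hx=hx, fx=fx, points=points)
    ukf.x = np.array([cx, cy, 0.0, 0.0, 0.0])   # initial state from first detection
    ukf.P *= 10.0                                # assumed initial uncertainty
    ukf.R = np.eye(2) * 1.0                      # assumed measurement noise
    ukf.Q = np.eye(5) * 0.1                      # assumed process noise
    return ukf

# Per frame: ukf.predict() yields the predicted center used (together with box
# IoU) for data association; ukf.update([cx, cy]) corrects the state with the
# matched fused detection.
```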
Pages: 24