Enhanced Object Detection in Autonomous Vehicles through LiDAR-Camera Sensor Fusion

Cited by: 4
|
Authors
Dai, Zhongmou [1 ,2 ]
Guan, Zhiwei [1 ,3 ]
Chen, Qiang [1 ]
Xu, Yi [4 ,5 ]
Sun, Fengyi [1 ]
Affiliations
[1] Tianjin Univ Technol & Educ, Sch Automobile & Transportat, Tianjin 300222, Peoples R China
[2] Shandong Transport Vocat Coll, Weifang 261206, Peoples R China
[3] Tianjin Sino German Univ Appl Sci, Sch Automobile & Rail Transportat, Tianjin 300350, Peoples R China
[4] Natl & Local Joint Engn Res Ctr Intelligent Vehicl, Tianjin 300222, Peoples R China
[5] QINGTE Grp Co Ltd, Qingdao 266106, Peoples R China
Source
WORLD ELECTRIC VEHICLE JOURNAL | 2024, Vol. 15, Iss. 7
Keywords
autonomous vehicles; object detection; object tracking; LiDAR-camera fusion; improved DeepSORT; EXTRINSIC CALIBRATION; TRACKING;
DOI
10.3390/wevj15070297
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic & Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
To realize accurate environment perception, which is the technological key to enabling autonomous vehicles to interact with their external environments, it is primarily necessary to solve the issues of object detection and tracking during vehicle movement. Multi-sensor fusion has become an essential process in efforts to overcome the shortcomings of individual sensor types and improve the efficiency and reliability of autonomous vehicles. This paper puts forward moving object detection and tracking methods based on LiDAR-camera fusion. Based on the joint calibration of the camera and LiDAR, YOLO and PointPillars network models are used to perform object detection on image and point cloud data, respectively. Then, a target box intersection-over-union (IoU) matching strategy, based on center-point distance probability and the improved Dempster-Shafer (D-S) theory, is used to perform class confidence fusion and obtain the final fusion detection result. For moving object tracking, the DeepSORT algorithm is improved to address the issue of identity switching that occurs when dynamic objects re-emerge after occlusion. An unscented Kalman filter is utilized to accurately predict the motion state of nonlinear objects, and object motion information is added to the IoU matching module to improve the matching accuracy in the data association process. Verification on self-collected data shows that fusion detection and tracking perform significantly better than a single sensor. The evaluation indexes of the improved DeepSORT algorithm are 66% for MOTA and 79% for MOTP, which are, respectively, 10% and 5% higher than those of the original DeepSORT algorithm. The improved DeepSORT algorithm effectively solves the problem of tracking instability caused by the occlusion of moving objects.
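The abstract's target-box matching combines IoU overlap with a center-point distance probability. The paper's exact formulation is not given in this record, so the following is only a minimal sketch of that general idea: a Gaussian weight on the center distance blended with plain IoU into a single matching score. The `sigma` scale and `alpha` blending weight are hypothetical parameters, not values from the paper.

```python
import math

def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def center_distance_prob(a, b, sigma=50.0):
    # Gaussian weight on the distance between box centers
    # (sigma is a hypothetical pixel-scale parameter).
    ca = ((a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0)
    cb = ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
    d = math.hypot(ca[0] - cb[0], ca[1] - cb[1])
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

def match_score(a, b, alpha=0.5):
    # Blend geometric overlap with center-point distance probability;
    # a camera box and a projected LiDAR box would be matched when
    # this score exceeds some threshold.
    return alpha * iou(a, b) + (1.0 - alpha) * center_distance_prob(a, b)
```

In a fusion pipeline, such a score would typically fill a cost matrix between camera detections and projected LiDAR detections, with assignment solved by a method like the Hungarian algorithm.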
Pages: 24
Related Papers
50 records total
  • [41] Improving Radar-Camera Fusion-based 3D Object Detection for Autonomous Vehicles
    Kurniawan, Irfan Tito
    Trilaksono, Bambang Riyanto
    2022 12TH INTERNATIONAL CONFERENCE ON SYSTEM ENGINEERING AND TECHNOLOGY (ICSET 2022), 2022, : 42 - 47
  • [42] LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles
    Kumar, G. Ajay
    Lee, Jin Hee
    Hwang, Jongrak
    Park, Jaehyeong
    Youn, Sung Hoon
    Kwon, Soon
    SYMMETRY-BASEL, 2020, 12 (02):
  • [43] Object Tracking Based on the Fusion of Roadside LiDAR and Camera Data
    Wang, Shujian
    Pi, Rendong
    Li, Jian
    Guo, Xinming
    Lu, Youfu
    Li, Tao
    Tian, Yuan
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [44] Sensor Data Fusion of LIDAR with Stereo RGB-D Camera for Object Tracking
    Dieterle, Thomas
    Particke, Florian
    Patino-Studencki, Lucila
    Thielecke, Joern
    2017 IEEE SENSORS, 2017, : 1173 - 1175
  • [45] Sensors and Sensor Fusion in Autonomous Vehicles
    Kocic, Jelena
    Jovicic, Nenad
    Drndarevic, Vujo
    2018 26TH TELECOMMUNICATIONS FORUM (TELFOR), 2018, : 575 - 578
  • [46] Enhancing Object Estimation by Camera-LiDAR Sensor Fusion Using IMM-KF With Error Characteristics in Autonomous Robot Systems
    Ho Lee, Sun
    Choi, Woo Young
    IEEE ACCESS, 2024, 12 : 197247 - 197258
  • [47] Toward Robust LiDAR-Camera Fusion in BEV Space via Mutual Deformable Attention and Temporal Aggregation
    Wang, Jian
    Li, Fan
    An, Yi
    Zhang, Xuchong
    Sun, Hongbin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 5753 - 5764
  • [48] Survey on Image and Point-Cloud Fusion-Based Object Detection in Autonomous Vehicles
    Peng, Ying
    Qin, Yechen
    Tang, Xiaolin
    Zhang, Zhiqiang
    Deng, Lei
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (12) : 22772 - 22789
  • [49] CoFF: Cooperative Spatial Feature Fusion for 3-D Object Detection on Autonomous Vehicles
    Guo, Jingda
    Carrillo, Dominic
    Tang, Sihai
    Chen, Qi
    Yang, Qing
    Fu, Song
    Wang, Xi
    Wang, Nannan
    Palacharla, Paparao
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (14) : 11078 - 11087
  • [50] BEV-CFKT: A LiDAR-camera cross-modality-interaction fusion and knowledge transfer framework with transformer for BEV 3D object detection
    Wei, Ming
    Li, Jiachen
    Kang, Hongyi
    Huang, Yijie
    Lu, Jun-Guo
    NEUROCOMPUTING, 2024, 582