3D Vehicle Trajectory Extraction Using DCNN in an Overlapping Multi-Camera Crossroad Scene

Cited: 3
Authors
Heo, Jinyeong [1]
Kwon, Yongjin [1]
Affiliations
[1] Ajou Univ, Dept Ind Engn, Suwon 16499, South Korea
Funding
National Research Foundation of Singapore;
Keywords
camera calibration; multi-object tracking; overlapping multi-camera crossroad scene; 3D bounding box estimation; 3D trajectory extraction;
DOI
10.3390/s21237879
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
The 3D vehicle trajectory in complex traffic conditions, such as crossroads and heavy traffic, is of great practical use in autonomous driving. To accurately extract the 3D vehicle trajectory from a perspective camera at a crossroad where vehicles span an angular range of 360 degrees, several problems must be solved: the narrow visual angle of a single-camera scene, vehicle occlusion under low camera perspectives, and the lack of physical vehicle information. In this paper, we propose a method for estimating the 3D bounding boxes of vehicles and extracting their trajectories using a deep convolutional neural network (DCNN) in an overlapping multi-camera crossroad scene. First, traffic data were collected using overlapping multi-cameras to obtain a wide range of trajectories around the crossroad. Then, the 3D bounding boxes of vehicles were estimated and tracked in each single-camera scene through DCNN models (YOLOv4, multi-branch CNN) combined with camera calibration. Using this information, the 3D vehicle trajectory could be extracted on the ground plane of the crossroad by combining the results from the overlapping multi-cameras through a homography matrix. Finally, in experiments, the errors of the extracted trajectories were corrected through simple linear interpolation and regression, and the accuracy of the proposed method was verified by measuring the difference from ground-truth data. Compared with previously reported methods, our approach is shown to be more accurate and more practical.
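The two geometric steps in the abstract, projecting per-camera detections onto a common ground plane with a homography and filling trajectory gaps by linear interpolation, can be sketched as below. This is a minimal illustration, not the authors' implementation: the homography matrix `H` and the sample pixel coordinates are made-up placeholders standing in for values that would come from camera calibration and the DCNN detector.

```python
import numpy as np

def project_to_ground(points_px, H):
    """Map 2D pixel points to ground-plane coordinates via homography H (3x3)."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide by the projective scale w

def fill_gaps(t_obs, xy_obs, t_query):
    """Linearly interpolate a ground-plane trajectory at missing timestamps."""
    x = np.interp(t_query, t_obs, xy_obs[:, 0])
    y = np.interp(t_query, t_obs, xy_obs[:, 1])
    return np.column_stack([x, y])

# Toy homography (hypothetical; a real H is estimated during calibration,
# e.g. from known point correspondences between image and road plane).
H = np.array([[0.05, 0.00, -10.0],
              [0.00, 0.05,  -5.0],
              [0.00, 0.00,   1.0]])

pixels = np.array([[200.0, 100.0],   # vehicle footprint centers in image space
                   [400.0, 300.0]])
ground = project_to_ground(pixels, H)        # ground-plane positions in meters
traj = fill_gaps(np.array([0.0, 2.0]),       # observed at t=0 s and t=2 s
                 ground,
                 np.array([0.0, 1.0, 2.0]))  # query includes the missing t=1 s
```

Once each camera's detections are mapped through its own homography into the shared ground-plane frame, trajectories from overlapping views can be merged and smoothed in a single coordinate system.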
Pages: 19
相关论文
共 33 条
  • [1] Vehicle Trajectory Prediction and Collision Warning via Fusion of Multisensors and Wireless Vehicular Communications
    Baek, Minjin
    Jeong, Donggi
    Choi, Dongho
    Lee, Sangsun
    [J]. SENSORS, 2020, 20 (01)
  • [2] Bewley A, 2016, IEEE IMAGE PROC, P3464, DOI 10.1109/ICIP.2016.7533003
  • [3] Bochkovskiy A., 2020, PREPRINT, DOI DOI 10.48550/ARXIV.2004.10934
  • [4] Castaneda J. N., 2011, Proceedings of the 2011 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2011), P591, DOI 10.1109/DICTA.2011.105
  • [5] Dangerous driving behavior detection using video-extracted vehicle trajectory histograms
    Chen, Zhijun
    Wu, Chaozhong
    Huang, Zhen
    Lyu, Nengchao
    Hu, Zhaozheng
    Zhong, Ming
    Cheng, Yang
    Ran, Bin
    [J]. JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2017, 21 (05) : 409 - 421
  • [6] Object Detection with Discriminatively Trained Part-Based Models
    Felzenszwalb, Pedro F.
    Girshick, Ross B.
    McAllester, David
    Ramanan, Deva
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2010, 32 (09) : 1627 - 1645
  • [7] Geiger A, 2012, PROC CVPR IEEE, P3354, DOI 10.1109/CVPR.2012.6248074
  • [8] Girshick R., 2017, rich feature hierarchies for accurate object detection and semantic segmentation tech report (v5)
  • [9] Fast R-CNN
    Girshick, Ross
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 1440 - 1448
  • [10] Real time traffic states estimation on arterials based on trajectory data
    Hiribarren, Gabriel
    Carlos Herrera, Juan
    [J]. TRANSPORTATION RESEARCH PART B-METHODOLOGICAL, 2014, 69 : 19 - 30