Deep Dual-Modal Traffic Objects Instance Segmentation Method Using Camera and LIDAR Data for Autonomous Driving

Cited: 20
Authors
Geng, Keke [1 ]
Dong, Ge [2 ]
Yin, Guodong [1 ]
Hu, Jingyu [1 ]
Affiliations
[1] Southeast Univ, Sch Mech Engn, Nanjing 211189, Peoples R China
[2] Tsinghua Univ, Inst Aeronaut & Astronaut, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
autonomous driving; traffic objects instance segmentation; deep learning network; VISION;
DOI
10.3390/rs12203274
Chinese Library Classification
X [Environmental Science, Safety Science];
Subject Classification Codes
08; 0830;
Abstract
Recent advancements in environmental perception for autonomous vehicles have been driven by deep learning-based approaches. However, effective traffic target detection in complex environments remains a challenging task. This paper presents a novel dual-modal instance segmentation deep neural network (DM-ISDNN) that merges camera and LIDAR data, which can efficiently address the problem of target detection in complex environments through multi-sensor data fusion. Due to the sparseness of the LIDAR point cloud data, we propose a weight assignment function that assigns different weight coefficients to different feature pyramid convolutional layers of the LIDAR sub-network. We compare and analyze early-, middle-, and late-stage fusion architectures in depth. Considering both detection accuracy and detection speed, the middle-stage fusion architecture with the weight assignment mechanism is selected as the best-performing option. This work has great significance for exploring the best feature fusion scheme of a multi-modal neural network. In addition, we apply a mask distribution function to improve the quality of the predicted mask. A dual-modal traffic object instance segmentation dataset is established using 7481 camera and LIDAR data pairs from the KITTI dataset, with 79,118 manually annotated instance masks. To the best of our knowledge, no existing instance annotation for the KITTI dataset matches this quality and volume. A novel dual-modal dataset, composed of 14,652 camera and LIDAR data pairs, is collected using our own autonomous vehicle under different environmental conditions in real driving scenarios, for which a total of 62,579 instance masks are obtained using a semi-automatic annotation method. This dataset can be used to validate the detection performance of instance segmentation networks under complex environmental conditions.
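The middle-stage fusion with per-level weight coefficients described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function name `fuse_pyramids`, the plain-list stand-ins for feature tensors, and the weighted-sum fusion rule are all assumptions made for clarity.

```python
# Hypothetical sketch of middle-stage dual-modal feature fusion with
# per-level weight coefficients for the LIDAR branch (names and the
# weighted-sum rule are illustrative assumptions, not from the paper).

def fuse_pyramids(cam_features, lidar_features, lidar_weights):
    """Fuse camera and LIDAR feature pyramids level by level.

    cam_features / lidar_features: lists of per-level feature maps
    (plain lists of floats here, standing in for conv feature tensors).
    lidar_weights: one coefficient per pyramid level, compensating for
    the sparseness of the LIDAR point cloud at each resolution.
    """
    assert len(cam_features) == len(lidar_features) == len(lidar_weights)
    fused = []
    for cam, lid, w in zip(cam_features, lidar_features, lidar_weights):
        # Element-wise weighted sum: the camera level is kept as-is,
        # the LIDAR level is scaled by its level-specific coefficient.
        fused.append([c + w * l for c, l in zip(cam, lid)])
    return fused


# Toy usage: two pyramid levels, LIDAR down-weighted at the finer level.
fused = fuse_pyramids(
    cam_features=[[2.0, 4.0], [1.0]],
    lidar_features=[[2.0, 6.0], [4.0]],
    lidar_weights=[0.5, 0.25],
)
# fused == [[3.0, 7.0], [2.0]]
```

The single fused pyramid would then feed the shared instance segmentation head, which is what distinguishes middle-stage fusion from early fusion (merging raw inputs) and late fusion (merging per-branch detections).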
Experimental results on the dual-modal KITTI Benchmark demonstrate that DM-ISDNN using middle-stage data fusion and the weight assignment mechanism has better detection performance than single- and dual-modal networks with other data fusion strategies, which validates the robustness and effectiveness of the proposed method. Meanwhile, compared to the state-of-the-art instance segmentation networks, our method shows much better detection performance, in terms of AP and F1 score, on the dual-modal dataset collected under complex environmental conditions, which further validates the superiority of our method.
Pages: 1-22 (22 pages)