MonoFENet: Monocular 3D Object Detection With Feature Enhancement Networks

Cited by: 60
Authors
Bao, Wentao [1 ]
Xu, Bin [1 ]
Chen, Zhenzhong [1 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
3D object detection; monocular images; feature enhancement; neural networks; autonomous driving;
DOI
10.1109/TIP.2019.2952201
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Monocular 3D object detection has the merit of low cost and can serve as an auxiliary module in autonomous driving systems, and it has attracted growing attention in recent years. In this paper, we present a monocular 3D object detection method with feature enhancement networks, which we call MonoFENet. Specifically, with the disparity estimated from the input monocular image, the features of both the 2D and 3D streams can be enhanced and utilized for accurate 3D localization. For the 2D stream, the input image is used to generate 2D region proposals as well as to extract appearance features. For the 3D stream, the estimated disparity is transformed into a dense 3D point cloud, which is then enhanced by the associated front view maps. With the RoI Mean Pooling layer, the 3D geometric features of RoI point clouds are further enhanced by the proposed point feature enhancement (PointFE) network. The region-wise features of the image and the point cloud are fused for the final 2D and 3D bounding box regression. Experimental results on the KITTI benchmark show that our method achieves state-of-the-art performance for monocular 3D object detection.
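The abstract's 3D stream hinges on back-projecting the estimated disparity map into a dense point cloud before RoI pooling and the PointFE network. The following is a minimal sketch of that disparity-to-point-cloud step, not the authors' code; the focal length, principal point, and stereo baseline are placeholder KITTI-like values introduced here for illustration.

```python
# Minimal sketch: converting an estimated disparity map into a 3D point cloud,
# the transformation the 3D stream relies on before RoI feature pooling.
# The intrinsics (fx, fy, cx, cy) and baseline below are assumed KITTI-like
# values, not numbers taken from the paper.
import numpy as np

def disparity_to_point_cloud(disparity, fx=721.5, fy=721.5,
                             cx=609.6, cy=172.9, baseline=0.54):
    """Convert an HxW disparity map (pixels) into an Nx3 point cloud (meters)."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid coordinates

    valid = disparity > 0                            # skip invalid/zero disparities
    depth = fx * baseline / disparity[valid]         # z = f * B / d

    x = (u[valid] - cx) * depth / fx                 # back-project to camera frame
    y = (v[valid] - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)          # (N, 3) points

# Example usage with a dummy disparity map of KITTI image size
disp = np.random.uniform(1.0, 60.0, size=(375, 1242)).astype(np.float32)
cloud = disparity_to_point_cloud(disp)
print(cloud.shape)  # (N, 3)
```

In the full pipeline, such a point cloud would then be cropped per region proposal and fed through the RoI Mean Pooling layer and the PointFE network described above.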
Pages: 2753-2765
Page count: 13