V2I-BEVF: Multi-modal Fusion Based on BEV Representation for Vehicle-Infrastructure Perception

Times Cited: 0
Authors
Xiang, Chao [1 ,3 ]
Xie, Xiaopo [1 ]
Feng, Chen [1 ,2 ]
Bai, Zhen
Niu, Zhendong [3 ]
Yang, Mingchuan [1 ]
Affiliations
[1] China Telecom Res Inst, Beijing 102209, Peoples R China
[2] China Telecom Corp Ltd, Technol Innovat Dept, Beijing 100032, Peoples R China
[3] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
Source
2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC | 2023
Keywords
VOXELNET;
DOI
10.1109/ITSC57777.2023.10421963
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
As one of the core modules of autonomous driving technology, environment perception has become a hot research topic in both industry and academia in recent years. However, self-driving vehicles face safety challenges due to perceptual blind spots and limited long-range sensing capability. In this paper, a multi-modal fusion method based on BEV representation for vehicle-infrastructure perception, referred to as V2I-BEVF, is proposed. It mainly consists of two branch networks that extract features from 2D images and 3D point clouds and transform them into BEV features, which are then fused and decoded by a Deformable Attention Transformer to achieve high-precision, real-time perception of road traffic participants. The proposed V2I-BEVF algorithm is experimentally verified on the open-source roadside DAIR-V2X-I dataset from Tsinghua University and Baidu. The experimental results show that, compared with several benchmark algorithms provided with the DAIR-V2X-I dataset, V2I-BEVF achieves a large improvement in pedestrian detection accuracy. We also verified the effectiveness of the proposed method on a dataset collected from our own roadside sensor devices. The V2I-BEVF algorithm can be combined with 5G/V2X communication technology and applied to V2I collaborative perception scenarios, taking full advantage of the wide field of view and small blind area of roadside sensors.
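As a rough illustration of the two-branch BEV fusion scheme described in the abstract, the sketch below wires a camera-derived BEV map and a LiDAR-derived BEV map into a cross-attention fusion step. All module names, feature shapes, and the output head are hypothetical, and standard multi-head attention stands in for the paper's Deformable Attention Transformer (which is not a stock PyTorch module); this is a minimal sketch of the general idea, not the authors' implementation.

```python
# Minimal sketch of a V2I-BEVF-style two-branch BEV fusion (assumed shapes;
# vanilla multi-head attention as a stand-in for deformable attention).
import torch
import torch.nn as nn


class BEVFusionSketch(nn.Module):
    """Fuse camera-BEV and LiDAR-BEV feature maps with cross-attention."""

    def __init__(self, dim: int = 256, num_classes: int = 10):
        super().__init__()
        # Stand-in projections: in the paper these would be full image and
        # point-cloud backbones whose outputs are mapped into the BEV plane.
        self.cam_proj = nn.Conv2d(3, dim, kernel_size=1)
        self.lidar_proj = nn.Conv2d(1, dim, kernel_size=1)
        # Placeholder for the Deformable Attention Transformer.
        self.fuse = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, num_classes)  # per-BEV-cell logits

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor):
        # (B, C, H, W) -> (B, H*W, C): token sequence over the BEV grid.
        q = self.cam_proj(cam_bev).flatten(2).transpose(1, 2)
        kv = self.lidar_proj(lidar_bev).flatten(2).transpose(1, 2)
        # Camera-BEV queries attend to LiDAR-BEV keys/values.
        fused, _ = self.fuse(q, kv, kv)
        return self.head(fused)  # (B, H*W, num_classes)


if __name__ == "__main__":
    model = BEVFusionSketch()
    cam = torch.randn(1, 3, 32, 32)    # toy camera-derived BEV map
    lidar = torch.randn(1, 1, 32, 32)  # toy LiDAR-derived BEV map
    print(model(cam, lidar).shape)     # torch.Size([1, 1024, 10])
```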
Pages: 5292 - 5299
Number of pages: 8