Radar and Vision Deep Multi-Level Feature Fusion Based on Deep Learning

Cited by: 0
Authors
Zhang Zhouping [1 ,2 ]
Yu Qin [3 ]
Wang Xiaoliang [3 ]
Zhang Qiancheng [2 ]
Bin Xin [2 ]
Affiliations
[1] Nanchang Automot Inst Intelligence & New Energy, Nanchang, Jiangxi, Peoples R China
[2] Tongji Univ, Shanghai, Peoples R China
[3] Jiangxi Isuzu Motors Co Ltd, Nanchang, Jiangxi, Peoples R China
Source
2024 4TH INTERNATIONAL CONFERENCE ON COMPUTER, CONTROL AND ROBOTICS, ICCCR 2024 | 2024
Keywords
intelligent driving; deep learning; radar; vision; multi-level feature fusion;
DOI
10.1109/ICCCR61138.2024.10585567
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification
0812 ;
Abstract
A perception network with deep multi-level fusion of radar and vision features is designed, based on deep learning, to achieve effective perception for intelligent driving in foggy scenes. To address the low efficiency of traditional feature fusion methods when applied to heterogeneous sensors with asymmetric feature information, spatial attention fusion is adopted to fuse the asymmetric radar and vision features. To address the problems that fusion of low-level single radar and vision features occurs only at a shallow level and that the fused features are difficult to propagate into the deep network, a deep multi-level feature fusion structure is constructed to fuse radar and vision features in multiple stages. In addition, an attention-mechanism module is introduced to integrate the fused features and extract key information. Finally, the validity of the network is verified by testing: defogged images are fed into the network, further verifying the effectiveness of the fusion defogging algorithm. Comparative experiments show that the fusion perception network effectively improves the detection accuracy and recall of an intelligent vehicle's perception system in foggy scenes.
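The spatial attention fusion step described in the abstract (cf. reference [4], Chang et al.) can be sketched in a minimal, framework-free form. This is only an illustrative sketch: the function name, the feature-map shapes, and the 1x1-convolution weights collapsing radar channels to a single attention channel are all assumptions, not the authors' implementation.

```python
import math

def _sigmoid(x):
    """Logistic sigmoid, squashing the attention map into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention_fuse(vision, radar, weights, bias=0.0):
    """Hypothetical minimal spatial attention fusion.

    vision:  [C_v][H][W] vision feature map
    radar:   [C_r][H][W] radar feature map (same spatial size)
    weights: [C_r] 1x1-convolution weights that collapse the radar
             channels into a single-channel spatial attention map
    Returns the fused [C_v][H][W] features: vision * sigmoid(mask),
    i.e. radar evidence gates the vision features per pixel.
    """
    H, W = len(radar[0]), len(radar[0][0])
    # 1x1 conv across radar channels -> single-channel attention map
    mask = [[_sigmoid(sum(w * radar[c][i][j]
                          for c, w in enumerate(weights)) + bias)
             for j in range(W)] for i in range(H)]
    # Broadcast the spatial mask over every vision channel
    return [[[vision[c][i][j] * mask[i][j] for j in range(W)]
             for i in range(H)] for c in range(len(vision))]
```

Because the mask is a single spatial channel broadcast over all vision channels, the two modalities need not have symmetric channel counts, which is the asymmetry the abstract refers to.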
Pages: 81-88 (8 pages)
Related Papers
21 in total
  • [1] Bijelic M., 2020, Proc. IEEE CVPR, p. 11679. DOI: 10.1109/CVPR42600.2020.01170
  • [2] Cesic J., Markovic I., Cvisic I., Petrovic I. Radar and stereo vision fusion for multitarget tracking on the special Euclidean group. Robotics and Autonomous Systems, 2016, 83: 338-348
  • [3] Chadwick S., 2019, IEEE Int. Conf. on Robotics and Automation (ICRA), p. 8311. DOI: 10.1109/ICRA.2019.8794312
  • [4] Chang S., Zhang Y., Zhang F., Zhao X., Huang S., Feng Z., Wei Z. Spatial Attention Fusion for Obstacle Detection Using MmWave Radar and Vision Sensor. Sensors, 2020, 20(4)
  • [5] Chavez-Garcia R.O., 2012, IEEE Intelligent Vehicles Symposium (IV), p. 159. DOI: 10.1109/IVS.2012.6232307
  • [6] Chu S.Y., 2023, Proc. IEEE/CVF Winter Conf. on Applications of Computer Vision (WACV), p. 5252
  • [7] Deng J., 2009, Proc. IEEE CVPR, p. 248. DOI: 10.1109/CVPRW.2009.5206848
  • [8] Glorot X., 2010, Proc. 13th Int. Conf. on Artificial Intelligence and Statistics (AISTATS), Vol. 9, p. 249
  • [9] Guo X., Du J., Gao J., Wang W. Pedestrian Detection Based on Fusion of Millimeter Wave Radar and Vision. 2018 Int. Conf. on Artificial Intelligence and Pattern Recognition (AIPR 2018): 38-42
  • [10] He K., Zhang X., Ren S., Sun J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. 2015 IEEE Int. Conf. on Computer Vision (ICCV): 1026-1034