Informative Data Selection With Uncertainty for Multimodal Object Detection

Cited by: 2
Authors
Zhang, Xinyu [1 ,2 ]
Li, Zhiwei [3 ]
Zou, Zhenhong [2 ,4 ]
Gao, Xin [5 ]
Xiong, Yijin [5 ]
Jin, Dafeng [2 ,4 ]
Li, Jun [2 ,4 ]
Liu, Huaping [6 ]
Affiliations
[1] Beihang Univ, Sch Transportat Sci & Engn, Beijing 100191, Peoples R China
[2] Tsinghua Univ, State Key Lab Automot Safety & Energy, Beijing 100084, Peoples R China
[3] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[5] China Univ Min & Technol Beijing, Comp Sci & Technol, Beijing 100083, Peoples R China
[6] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; National High-Tech Research and Development Program of China (863 Program);
Keywords
Data models; Adaptation models; Object detection; Uncertainty; Noise measurement; Feature extraction; Robustness; Autonomous driving; multimodal fusion; noise; object detection; NETWORK; LINE;
DOI
10.1109/TNNLS.2023.3270159
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Noise has always been a nonnegligible source of trouble in object detection: it confuses model reasoning and thereby reduces the informativeness of the data. It can lead to inaccurate recognition by shifting the observed patterns, which demands robust generalization from the models. To implement a general vision model, we need to develop deep learning models that can adaptively select valid information from multimodal data. This is motivated by two observations: multimodal learning can overcome the inherent defects of single-modal data, and adaptive information selection can reduce the chaos in multimodal data. To tackle this problem, we propose a universal uncertainty-aware multimodal fusion model. It adopts a multipipeline, loosely coupled architecture to combine the features and results from point clouds and images. To quantify the correlation in multimodal information, we model the uncertainty, as the inverse of data information, in different modalities and embed it in the bounding-box generation. In this way, our model reduces the randomness in fusion and generates reliable output. Moreover, we conducted a thorough investigation on the KITTI 2-D object detection dataset and its derived dirty data. Our fusion model is shown to resist severe noise interference such as Gaussian noise, motion blur, and frost, with only slight degradation. The experimental results demonstrate the benefits of our adaptive fusion. Our analysis of the robustness of multimodal fusion will provide further insights for future research.
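The abstract's core idea, weighting each modality by the inverse of its estimated uncertainty when generating boxes, can be illustrated with a minimal Python sketch. This is not the paper's implementation: fuse_boxes, sigma_img, and sigma_pc are hypothetical names, and a precision-weighted average is only one simple instantiation of uncertainty-aware fusion of per-modality predictions.

    import numpy as np

    def fuse_boxes(box_img, sigma_img, box_pc, sigma_pc):
        """Hypothetical sketch: fuse two [x1, y1, x2, y2] boxes, one from the
        image pipeline and one from the point-cloud pipeline, weighting each
        by inverse predictive variance (precision), so the less uncertain,
        i.e., more informative, modality dominates the fused estimate."""
        box_img = np.asarray(box_img, dtype=float)
        box_pc = np.asarray(box_pc, dtype=float)
        # Precision = 1 / variance; uncertainty treated as inverse information.
        w_img = 1.0 / (float(sigma_img) ** 2 + 1e-8)
        w_pc = 1.0 / (float(sigma_pc) ** 2 + 1e-8)
        fused = (w_img * box_img + w_pc * box_pc) / (w_img + w_pc)
        # Variance of the precision-weighted estimate, for downstream filtering.
        fused_var = 1.0 / (w_img + w_pc)
        return fused, fused_var

    # Example: the image branch is noisier (e.g., under motion blur), so the
    # fused box leans toward the point-cloud prediction.
    fused, var = fuse_boxes([100.0, 50.0, 200.0, 150.0], 5.0,
                            [102.0, 52.0, 198.0, 148.0], 1.0)

In this toy setting, the fused coordinates sit much closer to the point-cloud box, mirroring the paper's stated goal of suppressing the contribution of a noise-degraded modality rather than trusting both equally.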
Pages: 13561-13573
Page count: 13