Informative Data Selection With Uncertainty for Multimodal Object Detection

Cited by: 1
Authors
Zhang, Xinyu [1 ,2 ]
Li, Zhiwei [3 ]
Zou, Zhenhong [2 ,4 ]
Gao, Xin [5 ]
Xiong, Yijin [5 ]
Jin, Dafeng [2 ,4 ]
Li, Jun [2 ,4 ]
Liu, Huaping [6 ]
Affiliations
[1] Beihang Univ, Sch Transportat Sci & Engn, Beijing 100191, Peoples R China
[2] Tsinghua Univ, State Key Lab Automot Safety & Energy, Beijing 100084, Peoples R China
[3] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[5] China Univ Min & Technol Beijing, Comp Sci & Technol, Beijing 100083, Peoples R China
[6] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; National High-Tech R&D Program of China (863 Program);
Keywords
Data models; Adaptation models; Object detection; Uncertainty; Noise measurement; Feature extraction; Robustness; Autonomous driving; multimodal fusion; noise; object detection; NETWORK; LINE;
DOI
10.1109/TNNLS.2023.3270159
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Noise has always been a nonnegligible problem in object detection: it creates confusion in model reasoning and thereby reduces the informativeness of the data. It can lead to inaccurate recognition due to shifts in the observed pattern, which demands robust generalization from the models. To build a general vision model, we need deep learning models that can adaptively select valid information from multimodal data, for two main reasons: multimodal learning can overcome the inherent defects of single-modal data, and adaptive information selection can reduce chaos in multimodal data. To tackle this problem, we propose a universal uncertainty-aware multimodal fusion model. It adopts a multipipeline, loosely coupled architecture to combine the features and results from point clouds and images. To quantify the correlation in multimodal information, we model the uncertainty, as the inverse of data information, in different modalities and embed it in bounding-box generation. In this way, our model reduces the randomness in fusion and generates reliable output. Moreover, we conducted a complete investigation on the KITTI 2-D object detection dataset and its derived dirty data. Our fusion model is proven to resist severe noise interference such as Gaussian noise, motion blur, and frost, with only slight degradation. The experimental results demonstrate the benefits of our adaptive fusion. Our analysis of the robustness of multimodal fusion will provide further insights for future research.
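The abstract describes weighting modalities by the inverse of their uncertainty when generating bounding boxes. The paper's exact formulation is not given in this record; as a minimal illustrative sketch of the general idea, inverse-variance (precision) weighting fuses per-coordinate box predictions so that the less uncertain modality dominates. The function and variable names below are hypothetical, not from the paper.

```python
import numpy as np

def fuse_boxes(box_a, var_a, box_b, var_b):
    """Fuse per-coordinate box predictions from two modalities by
    inverse-variance weighting: lower uncertainty -> higher weight."""
    w_a = 1.0 / np.asarray(var_a)
    w_b = 1.0 / np.asarray(var_b)
    fused = (w_a * np.asarray(box_a) + w_b * np.asarray(box_b)) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # combined uncertainty is smaller than either input
    return fused, fused_var

# Example: the camera box is confident in x, the LiDAR box is confident in y
box_cam = [10.0, 20.0, 50.0, 80.0]   # (x1, y1, x2, y2)
var_cam = [0.5, 4.0, 0.5, 4.0]
box_lid = [12.0, 21.0, 52.0, 79.0]
var_lid = [4.0, 0.5, 4.0, 0.5]
fused, fused_var = fuse_boxes(box_cam, var_cam, box_lid, var_lid)
```

Each fused coordinate lands between the two inputs, pulled toward the modality with the lower variance, and the fused variance is strictly smaller than either input's — the "reduced randomness" the abstract refers to.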
Pages: 13561-13573
Page count: 13
Related papers
55 in total
[11]  Feng, Di; Haase-Schutz, Christian; Rosenbaum, Lars; Hertlein, Heinz; Glaser, Claudius; Timm, Fabian; Wiesbeck, Werner; Dietmayer, Klaus. Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (03): 1341-1360
[12]  Feng D, 2019, IEEE INT VEH SYM, P1280, DOI 10.1109/IVS.2019.8814046
[13]  Feng D, 2018, IEEE INT C INTELL TR, P3266, DOI 10.1109/ITSC.2018.8569814
[14]  Gal Y., 2016, UNCERTAINTY DEEP LEA, V1, P4
[15]  Geiger A, 2012, PROC CVPR IEEE, P3354, DOI 10.1109/CVPR.2012.6248074
[16]  He, Yihui; Zhu, Chenchen; Wang, Jianren; Savvides, Marios; Zhang, Xiangyu. Bounding Box Regression with Uncertainty for Accurate Object Detection [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 2883-2892
[17]  Hnewa, Mazin; Radha, Hayder. Object Detection Under Rainy Conditions for Autonomous Vehicles: A Review of State-of-the-Art and Emerging Techniques [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2021, 38 (01): 53-67
[18]  Kendall A., 2017, P NIPS, P1
[19]  Khodabandeh, Mehran; Vahdat, Arash; Ranjbar, Mani; Macready, William G. A Robust Learning Approach to Domain Adaptive Object Detection [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019: 480-490
[20]  Kim J, 2018, TENCON IEEE REGION, P0090, DOI 10.1109/TENCON.2018.8650166