Vehicle fusion detection in visible and infrared thermal images via sparse network and dynamic weight coefficient-based Dempster-Shafer evidence theory

Cited by: 2
Authors
Zhang, Xunxun [1 ]
Peng, Lang [1 ]
Lu, Xiaoyu [1 ]
Affiliations
[1] Xian Univ Architecture & Technol, Sch Civil Engn, Xian, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
vehicle fusion detection; visible and infrared thermal images; sparse network; dynamic weight coefficient; Dempster-Shafer evidence theory; GRAPH CONVOLUTIONAL NETWORKS; TRACKING;
DOI
10.1117/1.JRS.16.036519
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Code
08; 0830;
Abstract
Recently, visible and infrared thermal (RGB-T) images have attracted wide attention for vehicle fusion detection in traffic monitoring because of their strong complementarity, and how to fully exploit RGB-T images for this task has become an active research topic. However, infrared thermal datasets remain relatively scarce. Moreover, the most important requirements of vehicle fusion detection are accuracy, speed, and flexibility. To address these difficulties, we propose a concise and flexible vehicle fusion detection method for RGB-T images via a sparse network and dynamic weight coefficient-based Dempster-Shafer (D-S) evidence theory. It combines the detection results of RGB-T images through a decision-level fusion strategy. In this work, we focus on vehicle detection in infrared thermal images and on the fusion strategy. For the former, we construct an applicable network for vehicle detection in infrared thermal images with sparse parameters (weights) and high generalization ability. For the latter, a fusion strategy based on dynamic weight coefficient-based D-S evidence theory is proposed to fuse the two detection results from the RGB-T images. Rather than fusing the two detection results directly, the strategy first assesses their detection accuracy. Finally, we use the VIVID, VOT2019, and RGBT234 datasets to verify the proposed method. The vehicle fusion detection results show that the proposed method outperforms several mainstream approaches. (C) 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
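The decision-level fusion described in the abstract rests on Dempster's rule of combination, with each detector's mass function discounted by a reliability weight before fusion. The sketch below is a minimal illustration of that general machinery, not the paper's exact formulation: the binary vehicle/background frame of discernment and the fixed reliability weights are assumptions (the paper computes its weight coefficients dynamically).

```python
from itertools import product

# Frame of discernment for a binary vehicle/background decision.
# Focal elements are frozensets; the full frame encodes "uncertain".
FRAME = frozenset({"vehicle", "background"})

def discount(mass, alpha):
    """Shafer discounting: scale each focal mass by reliability alpha
    and move the remainder onto the full frame (total ignorance)."""
    out = {A: alpha * m for A, m in mass.items() if A != FRAME}
    out[FRAME] = 1.0 - alpha + alpha * mass.get(FRAME, 0.0)
    return out

def combine(m1, m2):
    """Dempster's rule of combination with conflict normalization."""
    fused, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            fused[inter] = fused.get(inter, 0.0) + a * b
        else:
            conflict += a * b  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    return {A: v / (1.0 - conflict) for A, v in fused.items()}

if __name__ == "__main__":
    vehicle = frozenset({"vehicle"})
    # Hypothetical per-sensor reliability weights; the paper derives
    # these dynamically, fixed values are used here for brevity.
    m_rgb = discount({vehicle: 0.9, FRAME: 0.1}, alpha=0.9)
    m_ir = discount({vehicle: 0.6, FRAME: 0.4}, alpha=0.7)
    print(combine(m_rgb, m_ir))
```

Discounting a low-reliability detector shifts its belief toward ignorance, so a confident but unreliable sensor cannot dominate the fused decision, which mirrors the abstract's point about judging detection accuracy before fusing.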
Pages: 17