SIE: infrared and visible image fusion based on scene information embedding

Cited: 0
Authors
Geng Y. [1 ]
Diao W. [1 ,2 ]
Zhao Y. [3 ]
Affiliations
[1] Electrical and Information Engineering, Changchun Institute of Technology, Kuanping Road, Changchun, Jilin
[2] Information and Control Engineering, Jilin Institute of Chemical Technology, Chengde Street, Jilin, Jilin
[3] College of Communication Engineering, Jilin University, Qianjin Street, Changchun, Jilin
Keywords
Convolutional Neural Network (CNN); Infrared and visible image fusion; Multi-scale fusion; Scene information
DOI
10.1007/s11042-024-19105-y
Abstract
In this article, we propose an infrared and visible image fusion method based on scene information embedding (SIE), which aims to obtain a fused image that preserves high-quality target information, such as pedestrians, vehicles, and signs, together with rich background texture. First, scene information is described so as to effectively represent the salient targets of the infrared image and the texture details of the visible image. Second, this scene information is embedded into a multi-scale fusion framework to fuse the two sources and reconstruct the desired fused image. Notably, the scene information is constructed by a pre-trained convolutional neural network, so the information extracted by the network is fully exploited without increasing the training complexity. On the LLVIP, MSRS, M3FD, TNO, RoadScene, and electricity1 datasets, the superiority of the proposed SIE algorithm over state-of-the-art methods is demonstrated by subjective evaluation and by objective evaluation using the information entropy (EN), standard deviation (SD), spatial frequency (SF), average gradient (AG), sum of the correlations of differences (SCD), and mutual information (MI) metrics. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
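As an illustration of the approach the abstract describes, the sketch below derives a per-pixel "scene information" weight from a fixed, pre-trained CNN and uses it to fuse the two inputs. This is a minimal sketch, not the authors' SIE pipeline: the VGG-16 backbone, the choice of layer, and the channel-wise L1-norm saliency are assumptions standing in for the paper's scene-information embedding and multi-scale fusion framework.

```python
# Minimal sketch of scene-information-guided fusion (an illustration, not the
# authors' exact SIE method). Assumes grayscale infrared/visible inputs in
# [0, 1] as equally sized numpy arrays; uses a pre-trained VGG-16 from
# torchvision purely as a fixed feature extractor, consistent with the
# abstract's claim that no extra training is required.
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models

# Fixed, pre-trained backbone; we only read its intermediate activations.
_vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def scene_weight(img: np.ndarray, layer: int = 8) -> np.ndarray:
    """Per-pixel scene-information weight from pre-trained CNN features.

    The L1 norm of the activations at one convolutional stage serves as a
    crude saliency/texture descriptor (a hypothetical stand-in for the
    paper's scene-information descriptor). ImageNet normalization is
    omitted for brevity.
    """
    x = torch.from_numpy(img).float()[None, None]       # 1 x 1 x H x W
    x = x.repeat(1, 3, 1, 1)                            # VGG expects 3 channels
    with torch.no_grad():
        feat = x
        for i in range(layer + 1):                      # run up to conv2_2
            feat = _vgg[i](feat)
    w = feat.abs().sum(dim=1, keepdim=True)             # L1 over channels
    w = F.interpolate(w, size=img.shape, mode="bilinear",
                      align_corners=False)              # back to input size
    return w[0, 0].numpy()

def fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Fuse the two images with CNN-derived soft per-pixel weights."""
    w_ir, w_vis = scene_weight(ir), scene_weight(vis)
    alpha = w_ir / (w_ir + w_vis + 1e-8)
    return alpha * ir + (1.0 - alpha) * vis
```

The backbone runs only in inference mode, so, as the abstract emphasizes, no additional training complexity is introduced; a full implementation would replace the single weighted sum with the paper's multi-scale fusion and reconstruction stages.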
Pages: 1463-1488
Number of pages: 25