An infrared and visible image fusion method based on deep convolutional feature extraction

Cited by: 0
Authors
Pang Z.-X. [1 ,2 ]
Liu G.-H. [1 ,2 ]
Chen C.-M. [1 ,2 ]
Liu H.-T. [1 ,2 ]
Affiliations
[1] College of Information Engineering, Southwest University of Science and Technology, Mianyang
[2] Robot Technology Used for Special Environment Laboratory of Sichuan Province, Mianyang
Source
Kongzhi yu Juece/Control and Decision | 2024 / Vol. 39 / No. 03
Keywords
EfficientNet; image fusion; infrared image; transfer learning; visible image;
DOI
10.13195/j.kzyjc.2022.0704
Abstract
Most current infrared and visible image fusion approaches require decomposition of the source images during fusion, which blurs details and loses salient targets. To solve this problem, an infrared and visible image fusion method based on deep convolutional feature extraction is proposed. First, the feature extraction capability of EfficientNet is analysed using transfer learning, and seven feature extraction modules are selected. Second, the source images are fed directly into these feature extraction modules to extract salient features. Channel normalization and an averaging operator are then applied to obtain the activity level maps. A fusion rule combining Softmax and up-sampling produces the fusion weights, which are convolved with the source images to yield seven candidate fused images. Finally, the pixel-wise maximum of the candidate fused images is taken as the final reconstructed fusion result. Experiments on public datasets compare the proposed method with classical traditional and deep learning methods. Subjective and objective results show that the proposed method effectively integrates the salient information of infrared and visible images and enhances the detail texture of the fused images, providing better visual effects with fewer image artefacts and less artificial noise. © 2024 Northeast University. All rights reserved.
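The fusion rule summarized in the abstract (activity level maps from deep features, Softmax across the two modalities, up-sampling to image resolution, weighted combination, and a pixel-wise maximum over the candidate results) can be sketched as follows. This is a minimal NumPy stand-in, not the paper's implementation: the EfficientNet features are assumed to be precomputed arrays, and the nearest-neighbour up-sampling and function names are illustrative assumptions.

```python
import numpy as np

def activity_map(feat):
    """Activity level map: channel-normalized mean of absolute feature values.
    feat has shape (C, h, w); the result has shape (h, w)."""
    return np.abs(feat).mean(axis=0)

def upsample_nearest(w, shape):
    """Nearest-neighbour up-sampling of a weight map to the source image size
    (assumes the image size is an integer multiple of the feature map size)."""
    ry, rx = shape[0] // w.shape[0], shape[1] // w.shape[1]
    return np.repeat(np.repeat(w, ry, axis=0), rx, axis=1)

def fuse_pair(ir_img, vis_img, ir_feat, vis_feat):
    """One candidate fused image from one feature-extraction level."""
    a_ir, a_vis = activity_map(ir_feat), activity_map(vis_feat)
    # Softmax over the two modalities at each spatial location
    e_ir, e_vis = np.exp(a_ir), np.exp(a_vis)
    w_ir = upsample_nearest(e_ir / (e_ir + e_vis), ir_img.shape)
    return w_ir * ir_img + (1.0 - w_ir) * vis_img

def fuse(ir_img, vis_img, ir_feats, vis_feats):
    """Pixel-wise maximum over the candidate fused images (one per level)."""
    candidates = [fuse_pair(ir_img, vis_img, fi, fv)
                  for fi, fv in zip(ir_feats, vis_feats)]
    return np.maximum.reduce(candidates)
```

With seven feature levels, `ir_feats` and `vis_feats` would each hold seven feature arrays, giving seven candidates whose pixel-wise maximum is the reconstructed result.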
Pages: 910-918
Page count: 8
References
28 records in total
  • [1] Min L, Cao S J, Zhao H C, et al., Infrared and visible image fusion with improved residual dense generative adversarial network, Control and Decision
  • [2] Zhang H, Xu H, Tian X, et al., Image fusion meets deep learning: A survey and perspective, Information Fusion, 76, pp. 323-336, (2021)
  • [3] Zhou J W, Ren K, Wan M J, et al., An infrared and visible image fusion method based on VGG-19 network, Optik, 248, (2021)
  • [4] Shen Y, Huang C H, Huang F, et al., Research progress of infrared and visible image fusion technology, Infrared and Laser Engineering, 50, 9, pp. 152-169, (2021)
  • [5] Burt P, Adelson E., The Laplacian pyramid as a compact image code, IEEE Transactions on Communications, 31, 4, pp. 532-540, (1983)
  • [6] Toet A., Image fusion by a ratio of low-pass pyramid, Pattern Recognition Letters, 9, 4, pp. 245-253, (1989)
  • [7] Selesnick I W, Baraniuk R G, Kingsbury N C., The dual-tree complex wavelet transform, IEEE Signal Processing Magazine, 22, 6, pp. 123-151, (2005)
  • [8] Wei H Y, Zhu Z Q, Chang L, et al., A novel precise decomposition method for infrared and visible image fusion, Chinese Control Conference, pp. 313-317, (2019)
  • [9] Liu Y, Chen X, Ward R K, et al., Image fusion with convolutional sparse representation, IEEE Signal Processing Letters, 23, 12, pp. 1882-1886, (2016)
  • [10] Liu Y, Chen X, Cheng J, et al., Infrared and visible image fusion with convolutional neural networks, International Journal of Wavelets, Multiresolution and Information Processing, 16, 3, (2018)