Infrared and visible image fusion based on GEMD and improved PCNN

Cited: 0
Authors
Yang Y. [1 ]
Li X. [1 ]
Dang J. [1 ]
Wang Y. [1 ]
Affiliations
[1] School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou
Funding
National Natural Science Foundation of China
Keywords
bilateral filter; gradient filter; image fusion; infrared and visible image; pulse coupled neural network;
DOI
10.13700/j.bh.1001-5965.2022.0756
Abstract
Because the decomposition tools used in traditional image fusion methods are limited, fused images suffer from artifacts and from reduced brightness and contrast along edges. This paper presents an infrared and visible image fusion method based on gradient edge-preserving multi-level decomposition (GEMD) and an improved pulse-coupled neural network (PCNN). A gradient bilateral filter (GBF) is proposed by combining the bilateral filter with the gradient filter (GF); it smooths detail information while preserving edge structure, brightness, and contrast. First, a multi-level decomposition model built from the GBF and the GF divides each source image into three layers of feature maps and a base layer, where each layer of feature maps contains two structural components, thin and thick. Then, according to the information carried by each feature map, three fusion rules are applied: an improved PCNN whose input stimulus incorporates a modified Laplacian operator to enhance weak image details, regional energy, and contrast saliency; these rules produce the fused sub-feature maps and the fused base layer. Finally, the sub-fusion images are superimposed to obtain the final fused image. Experiments show that the proposed method improves both visual quality and quantitative evaluation scores and enhances the brightness and contrast of the fused infrared and visible images. © 2023 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
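A minimal sketch follows to make the decomposition-and-reconstruction pipeline in the abstract more concrete. OpenCV's standard bilateral filter stands in for the proposed GBF/GF pair, and a simple max-absolute-detail selection replaces the PCNN, regional-energy, and contrast-saliency rules; all function names, parameters, and the file names ir.png and vis.png are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np

def multilevel_decompose(img, levels=3):
    """Split an image into `levels` detail (feature) maps plus a base layer by
    repeated edge-preserving smoothing; a stand-in for the paper's GEMD, which
    uses the proposed gradient bilateral filter and gradient filter instead."""
    current = img.astype(np.float32)
    features = []
    for i in range(levels):
        # Edge-preserving smoothing with progressively larger smoothing scales.
        smoothed = cv2.bilateralFilter(current, d=9,
                                       sigmaColor=25.0 * (i + 1),
                                       sigmaSpace=5.0 * (i + 1))
        features.append(current - smoothed)  # detail/feature map at this level
        current = smoothed
    return features, current  # `current` is the remaining base layer

def fuse(ir, vis):
    """Toy fusion: decompose both inputs, keep the stronger detail per level
    (a crude surrogate for the PCNN, regional-energy, and contrast-saliency
    rules), average the base layers, and sum everything back up."""
    f_ir, b_ir = multilevel_decompose(ir)
    f_vis, b_vis = multilevel_decompose(vis)
    fused = (b_ir + b_vis) / 2.0
    for d_ir, d_vis in zip(f_ir, f_vis):
        fused += np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return np.clip(fused, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Hypothetical input files: registered infrared and visible images.
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
    vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("fused.png", fuse(ir, vis))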
Pages: 2317-2329
Page count: 12