AVAFN-adaptive variational autoencoder fusion network for multispectral image

Cited by: 0
Authors
Chu, Wen-Lin [1 ]
Tu, Ching-Che [2 ]
Jian, Bo-Lin [2 ]
Affiliations
[1] Department of Bio-Industrial Mechatronics Engineering, National Chung Hsing University, Taichung
[2] Department of Electrical Engineering, National Chin-Yi University of Technology, Taichung
Keywords
Adaptive loss assignment method; Image fusion; Multi-input image fusion; Variational autoencoder
DOI
10.1007/s11042-024-20340-6
Abstract
Image fusion extracts and combines the most meaningful information from images captured by different sensors to produce a single image that better serves downstream applications. Deep learning has significantly advanced the field: the powerful feature-extraction and reconstruction capabilities of neural networks have broadened the prospects for fused images. This study introduces the Adaptive Variational Autoencoder Fusion Network (AVAFN), which extends the Variational Autoencoder (VAE) architecture with additional convolutional, ReLU, and Dropout layers to improve feature extraction and prevent overfitting. AVAFN also introduces a novel loss function that accounts for global contrast, feature intensity, and structural texture, reducing feature and contrast loss during fusion and preserving detailed texture in line with human visual perception. Loss-function weighting is central to AVAFN's Adaptive Loss Assignment Method, which adjusts the loss weights according to the feature content of the input images, analyzes features at multiple depths, and obtains optimal fusion results. Experiments show that AVAFN excels at multi-modal and multi-exposure image fusion, achieving scene-appropriate results through color-channel transformation and effectively extracting deep features in multi-input fusion tasks through feature concatenation. These results, confirmed by comparisons with popular fusion methods on three image databases, demonstrate AVAFN's superiority in both subjective visual perception and objective fusion metrics. The experiments validate AVAFN's applicability to a variety of image fusion tasks, achieving the desired fusion outcomes even on low-quality images and showcasing its potential in the image fusion domain. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
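The abstract describes the Adaptive Loss Assignment Method only qualitatively; the paper's actual loss terms and weighting rule are not given here. As a minimal illustration of the general idea, the following pure-Python sketch assumes a composite loss built from hypothetical contrast and intensity terms whose weights are normalized from per-input image statistics. All function names, the choice of statistics, and the weighting rule are illustrative assumptions, not the paper's formulas.

```python
# Hypothetical sketch of an adaptive composite fusion loss in the spirit of
# AVAFN's Adaptive Loss Assignment Method. Images are flattened lists of
# pixel intensities; the loss terms and weighting rule are assumptions.

def contrast(img):
    """Global-contrast proxy: variance of pixel intensities."""
    n = len(img)
    mean = sum(img) / n
    return sum((p - mean) ** 2 for p in img) / n

def mean_intensity(img):
    """Feature-intensity proxy: mean pixel value."""
    return sum(img) / len(img)

def adaptive_weights(inputs):
    """Derive loss weights from the input images' own statistics
    (normalized to sum to 1), so each fusion task re-balances its terms."""
    stats = [contrast(inputs[0]), mean_intensity(inputs[1])]
    total = sum(stats) or 1.0
    return [s / total for s in stats]

def fusion_loss(fused, inputs):
    """Weighted sum of per-term discrepancies between the fused image
    and the input each term is meant to preserve."""
    w = adaptive_weights(inputs)
    l_contrast = abs(contrast(fused) - contrast(inputs[0]))
    l_intensity = abs(mean_intensity(fused) - mean_intensity(inputs[1]))
    return w[0] * l_contrast + w[1] * l_intensity

# Example: a high-contrast input and a bright, flat input.
src_a = [0.0, 1.0, 0.0, 1.0]
src_b = [0.8, 0.8, 0.8, 0.8]
print(fusion_loss([0.4, 0.9, 0.4, 0.9], [src_a, src_b]))
```

A fused image that reproduces both the first input's contrast and the second input's mean intensity drives this loss toward zero; in the paper this role is played by deeper structural, intensity, and texture terms rather than simple statistics.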
Pages: 89297–89315
Number of pages: 18