MIFFuse: A Multi-Level Feature Fusion Network for Infrared and Visible Images

Cited by: 15
Authors
Zhu D. [1]
Zhan W. [1]
Jiang Y. [1]
Xu X. [1]
Guo R. [1]
Affiliations
[1] National Demonstration Center for Experimental Electrical and Electronic Technology, Changchun University of Science and Technology, Jilin
Keywords
concatenation block; end-to-end framework; image fusion; multi-level features
DOI
10.1109/ACCESS.2021.3111905
Abstract
Image fusion is beneficial to many applications and remains one of the most common and challenging computer vision tasks. An ideal infrared and visible image fusion result should retain the salient infrared targets while preserving as much of the visible image's textural detail as possible. A novel infrared and visible image fusion framework is proposed for this purpose. The proposed fusion network (MIFFuse) is an end-to-end, multi-level fusion network for infrared and visible images. The presented approach makes effective use of the intermediate convolution layers' output features to preserve the primary image fusion information. We also build a concatenation block (cat_block) that exchanges information between the two paths, so that richer information is gathered during the convolution steps. To further reduce the model's running time, the proposed method reduces the number of feature channels while maintaining fusion accuracy. Extensive experiments on the TNO and CVC-14 image fusion datasets show that MIFFuse outperforms the other methods in terms of both subjective visual effects and quantitative metrics. Furthermore, MIFFuse is approximately twice as fast as the most recent state-of-the-art methods.
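Note: the abstract describes the cat_block only at a high level. The following PyTorch sketch illustrates the general idea of a cross-path concatenation block, in which the infrared and visible feature paths exchange information via channel-wise concatenation before the next convolution. The class name CatBlock, the channel sizes, and the ReLU activations are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class CatBlock(nn.Module):
    # Hypothetical cross-path concatenation block: each path's next
    # convolution sees the concatenated features of BOTH paths.
    def __init__(self, channels):
        super().__init__()
        # After concatenation each path receives 2*channels inputs;
        # a 3x3 convolution fuses them back down to `channels` outputs.
        self.conv_ir = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.conv_vis = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_ir, feat_vis):
        # Swap information between paths via channel-wise concatenation.
        mixed = torch.cat([feat_ir, feat_vis], dim=1)
        return self.conv_ir(mixed), self.conv_vis(mixed)

# Usage: two 16-channel feature maps in, two 16-channel feature maps out.
block = CatBlock(channels=16)
ir, vis = torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64)
out_ir, out_vis = block(ir, vis)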
Pages: 130778-130792
Page count: 14