End-to-End Infrared and Visible Image Fusion Method Based on GhostNet

Cited by: 0
Authors
Cheng C. [1 ]
Wu X. [1 ]
Xu T. [1 ]
Affiliations
[1] Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi
Funding
National Natural Science Foundation of China
Keywords
Deep Learning; Image Fusion; Infrared Image; Visible Image;
DOI
10.16451/j.cnki.issn1003-6059.202111006
Abstract
Most existing deep-learning-based infrared and visible image fusion methods rely on handcrafted fusion strategies in the fusion layer and therefore cannot learn a fusion strategy suited to the specific fusion task. To overcome this problem, an end-to-end infrared and visible image fusion method based on GhostNet is proposed. The Ghost module replaces the ordinary convolution layers in the network architecture, yielding a lightweight model. Under the constraint of the loss function, the network learns image features adapted to the fusion task, so feature extraction and fusion are accomplished simultaneously. In addition, a perceptual loss is introduced into the loss function, so the deep semantic information of the source images is also exploited during fusion. The source images are concatenated along the channel dimension and fed into the network; a densely connected encoder extracts their deep features, and the fusion result is reconstructed by the decoder. Experiments show that the proposed method is superior in both subjective comparison and objective image quality evaluation metrics. © 2021, Science Press. All rights reserved.
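The core idea of the Ghost module mentioned in the abstract is to produce a small set of "intrinsic" feature maps with an ordinary convolution and then generate the remaining "ghost" maps from them with cheap per-channel operations, halving the cost of a standard convolution layer. The sketch below illustrates that idea in plain NumPy; the 1x1 primary convolution and the per-channel scaling that stands in for the cheap depthwise operation are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def ghost_module(x, w_primary, w_cheap):
    """Minimal Ghost-module sketch (assumed shapes, not the paper's code).

    x         : (C_in, H, W) input feature map.
    w_primary : (C_prim, C_in) weights of a 1x1 "primary" convolution
                producing the intrinsic features.
    w_cheap   : (C_prim,) per-channel scale standing in for the cheap
                depthwise operation that generates the ghost features.
    Returns   : (2 * C_prim, H, W) intrinsic and ghost maps concatenated.
    """
    # A 1x1 convolution is a channel-mixing matrix multiply at every pixel.
    primary = np.einsum('oc,chw->ohw', w_primary, x)
    # Cheap linear map applied channel-wise to the intrinsic features.
    ghost = w_cheap[:, None, None] * primary
    # Output concatenates intrinsic and ghost features along channels.
    return np.concatenate([primary, ghost], axis=0)
```

With half of the output channels obtained from a scalar operation per channel instead of a full convolution, the module keeps the output width of a normal layer at roughly half the multiply-accumulate cost, which is what makes the fusion network lightweight.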
Pages: 1028-1037
Number of pages: 9