Infrared and visible image fusion based on VPDE model and VGG network

Cited: 0
Authors
Donghua Luo
Gang Liu
Durga Prasad Bavirisetti
Yisheng Cao
Affiliations
[1] Shanghai University of Electric Power
[2] Norwegian University of Science and Technology
Source
Applied Intelligence | 2023, Volume 53
Keywords
Variational partial differential equation; Expectation-maximization algorithm; VGG network; Infrared and visible image fusion;
DOI: not available
Abstract
Infrared (IR) and visible (VIS) image fusion techniques are widely applied to high-level vision tasks such as object detection, recognition, and tracking. However, most existing fusion algorithms exhibit varying degrees of edge-step effect and texture degradation in their fused images. To improve fusion quality, an IR and VIS image fusion method based on a variational partial differential equation (VPDE) model and a VGG network is proposed. A productive smoothing segmentation, built on a novel regularization function, is integrated into the energy function of the VPDE model, and the resulting model is used to decompose the source images into low-frequency and high-frequency components. The low-frequency components are fused with a probabilistic parameter model based on the space-alternating generalized expectation-maximization (SAGE) algorithm rather than the traditional average fusion rule. Multi-layer features of the high-frequency components are then extracted with a VGG network; the $l_1$-norm and a weighted-average rule generate several candidates for the fused detail content, and the final details are selected with a maximum-selection strategy. Finally, the fused image is reconstructed from the fused low-frequency and high-frequency components. Extensive experiments on the TNO and RoadScene datasets demonstrate that the proposed technique effectively suppresses artifacts and the step effect. In subjective comparisons, the proposed method highlights salient objects while strengthening texture information, and in objective comparisons it outperforms 13 state-of-the-art methods on the evaluation metrics.
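The abstract describes a multi-step pipeline; the Python sketch below (not the authors' code) illustrates one plausible reading of it. The VPDE decomposition is replaced by a separable Gaussian low-pass filter and the SAGE-based probabilistic fusion of the low-frequency components by a plain average, both purely as placeholders; the detail branch follows the abstract more closely, using pretrained VGG-19 relu features (the choice of relu1_1 and relu2_1 is an assumption), channel-wise l1-norm activity maps, a weighted-average rule per layer, and per-pixel maximum selection over the resulting candidates.

# Hedged sketch of the fusion pipeline in the abstract (assumptions noted inline).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

def decompose(img, ksize=31, sigma=5.0):
    # Placeholder for the VPDE model: low frequency = separable Gaussian smoothing,
    # high frequency = the residual detail layer.
    pad = ksize // 2
    coords = torch.arange(ksize, dtype=torch.float32) - pad
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, -1)
    low = F.conv2d(F.pad(img, (pad, pad, 0, 0), mode="reflect"), g.unsqueeze(2))
    low = F.conv2d(F.pad(low, (0, 0, pad, pad), mode="reflect"), g.unsqueeze(3))
    return low, img - low

class VGGFeatures(torch.nn.Module):
    # Multi-layer relu features from a frozen, pretrained VGG-19 (relu1_1 and
    # relu2_1 here; the paper's exact layer set is not given in the abstract).
    def __init__(self):
        super().__init__()
        feats = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
        self.block1 = feats[:2]   # conv1_1 -> relu1_1
        self.block2 = feats[2:7]  # ... -> relu2_1
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        x = x.repeat(1, 3, 1, 1)  # single-channel detail layer -> 3 channels
        f1 = self.block1(x)       # ImageNet normalization omitted for brevity
        f2 = self.block2(f1)
        return [f1, f2]

def fuse_details(d_ir, d_vis, extractor):
    # l1-norm activity maps -> weighted-average rule per layer (one fused
    # candidate per layer) -> per-pixel maximum selection across candidates.
    h, w = d_ir.shape[-2:]
    candidates = []
    for f_ir, f_vis in zip(extractor(d_ir), extractor(d_vis)):
        a_ir = f_ir.abs().sum(dim=1, keepdim=True)   # channel-wise l1-norm
        a_vis = f_vis.abs().sum(dim=1, keepdim=True)
        w_ir = a_ir / (a_ir + a_vis + 1e-8)
        w_ir = F.interpolate(w_ir, size=(h, w), mode="bilinear", align_corners=False)
        candidates.append(w_ir * d_ir + (1.0 - w_ir) * d_vis)
    stacked = torch.stack(candidates)                 # (layers, N, 1, H, W)
    idx = stacked.abs().argmax(dim=0, keepdim=True)   # keep the strongest detail
    return torch.gather(stacked, 0, idx).squeeze(0)

def fuse(ir, vis, extractor):
    # ir, vis: (1, 1, H, W) tensors in [0, 1]; returns the fused image.
    low_ir, det_ir = decompose(ir)
    low_vis, det_vis = decompose(vis)
    low_fused = 0.5 * (low_ir + low_vis)  # stand-in for the SAGE/EM-based rule
    return (low_fused + fuse_details(det_ir, det_vis, extractor)).clamp(0, 1)

Usage, under the same assumptions: extractor = VGGFeatures(); fused = fuse(ir_tensor, vis_tensor, extractor), where the two inputs are co-registered grayscale images converted to (1, 1, H, W) tensors.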
Pages: 24739-24764
Number of pages: 25