CoMoFusion: Fast and High-Quality Fusion of Infrared and Visible Image with Consistency Model

Cited: 0
Authors
Meng, Zhiming [1 ]
Li, Hui [1 ]
Zhang, Zeyang [1 ]
Shen, Zhongwei [2 ]
Yu, Yunlong [3 ]
Song, Xiaoning [1 ]
Wu, Xiaojun [1 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Int Joint Lab Artificial Intelligence Jiangsu Pro, Wuxi 214122, Peoples R China
[2] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou, Peoples R China
[3] Zhejiang Univ, Coll Informat Sci & Elect Engn, Hangzhou, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT VIII | 2025 / Vol. 15038
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Multi-modal information; Consistency model; Diffusion; Network;
DOI
10.1007/978-981-97-8685-5_38
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Generative models are widely used to model the distribution of fused images in infrared and visible image fusion. However, existing generative-model-based fusion methods often suffer from unstable training and slow inference. To address this, a novel fusion method based on the consistency model is proposed, termed CoMoFusion, which generates high-quality fused images with fast inference. Specifically, a consistency model is used to construct multi-modal joint features in the latent space via its forward and reverse processes. The infrared and visible features extracted by the trained consistency model are then fed into a fusion module to produce the final fused image. To enhance the texture and salient information of fused images, a novel loss based on pixel value selection is also designed. Extensive experiments on public datasets show that our method achieves state-of-the-art fusion performance compared with existing fusion methods. The code is available at https://github.com/ZhimingMeng/CoMoFusion.
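The abstract names a "loss based on pixel value selection" but this record does not specify its form. A minimal sketch of one common realization is given below, assuming (not confirmed by this record) that the salient-intensity target is the element-wise maximum of the infrared and visible images and that the fused image is penalized by its L1 distance to that target; the function name `pixel_select_loss` is hypothetical:

```python
import numpy as np

def pixel_select_loss(fused, ir, vis):
    """Hypothetical sketch of a pixel-value-selection loss.

    Assumes the per-pixel target is the brighter of the two source
    pixels (element-wise maximum of IR and visible), and returns the
    mean absolute error between the fused image and that target.
    """
    target = np.maximum(ir, vis)                 # keep the brighter source pixel
    return float(np.abs(fused - target).mean())  # L1 distance, averaged over pixels

# Toy usage: a fused image that matches the brighter pixels has zero loss.
ir = np.array([[0.9, 0.1]])
vis = np.array([[0.2, 0.6]])
print(pixel_select_loss(np.array([[0.9, 0.6]]), ir, vis))  # → 0.0
```

In practice such a selection term is typically combined with a gradient (texture) term; the exact weighting used by CoMoFusion is described in the paper itself, not in this record.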
Pages: 539 - 553
Page count: 15