CEFusion: Multi-Modal medical image fusion via cross encoder

Cited by: 0
Authors
Zhu, Ya [1 ]
Wang, Xue [1 ]
Chen, Luping [1 ]
Nie, Rencan [1 ]
Affiliations
[1] School of Information Science and Engineering, Yunnan University, Kunming 650500, China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Deep learning; Image fusion; Image texture; Medical imaging; Textures
DOI
Not available
Abstract
Most existing deep learning-based multi-modal medical image fusion (MMIF) methods use a single-branch feature extraction strategy to achieve good fusion performance. However, for MMIF tasks, this structure severs the internal connections between source images, causing information redundancy and degraded fusion performance. To this end, this paper proposes a novel unsupervised network, termed CEFusion. Unlike existing architectures, a cross-encoder is designed that exploits the complementary properties of the source images to refine source features through feature interaction and reuse. Furthermore, to force the network to learn complementary information between the source images and to generate fused images with high contrast and rich textures, a hybrid loss is proposed, consisting of a weighted fidelity loss and a gradient loss. Specifically, the weighted fidelity loss not only forces the fusion result to approximate the source images but also, through weight estimation, effectively preserves the luminance information of the source images, while the gradient loss preserves their texture information. Experimental results demonstrate the superiority of the method over the state of the art in terms of subjective visual quality and quantitative metrics across various datasets. © 2022 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
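The hybrid loss summarized in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea only, not the authors' implementation: the per-pixel weight estimation, the finite-difference gradient operator, and the balance factor `alpha` are all assumptions standing in for details given in the full paper.

```python
import numpy as np

def gradient_magnitude(img):
    # Simple finite-difference gradients as a stand-in for whatever
    # gradient operator (e.g. Sobel) the paper actually uses.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def hybrid_loss(fused, src_a, src_b, alpha=0.5):
    # Weight estimation (assumed scheme): brighter pixels in a source get
    # larger weights, so the fidelity term favors the dominant luminance.
    w_a = src_a / (src_a + src_b + 1e-8)
    w_b = 1.0 - w_a

    # Weighted fidelity loss: pull the fused image toward each source,
    # scaled by the per-pixel weights.
    fidelity = np.mean(w_a * (fused - src_a) ** 2
                       + w_b * (fused - src_b) ** 2)

    # Gradient loss: match fused gradients to the element-wise maximum of
    # the source gradients, preserving the stronger texture at each pixel.
    g_target = np.maximum(gradient_magnitude(src_a),
                          gradient_magnitude(src_b))
    gradient = np.mean(np.abs(gradient_magnitude(fused) - g_target))

    return fidelity + alpha * gradient
```

In this sketch, a fused image identical to both sources incurs zero loss, and the loss grows as the fused result drifts from the sources in either intensity or texture, which is the qualitative behavior the abstract describes.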
Pages: 3177-3189