DesTrans: A medical image fusion method based on Transformer and improved DenseNet

Cited by: 2
Authors
Song Y. [1]
Dai Y. [1,2]
Liu W. [1]
Liu Y. [1]
Liu X. [1]
Yu Q. [1]
Liu X. [1]
Que N. [1]
Li M. [1,2]
Affiliations
[1] College of Medicine and Biological Information Engineering, Northeastern University, Shenyang
[2] Engineering Center on Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang
Keywords
Convolutional neural network; Medical image fusion; Transformer
DOI
10.1016/j.compbiomed.2024.108463
Abstract
Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely used in the field of medical image fusion. Traditional medical image fusion methods operate directly on pixels, for example by superimposing images, and the introduction of deep learning has improved fusion quality. However, these methods still suffer from problems such as edge blurring and information redundancy. In this paper, we propose a deep learning network model that integrates a Transformer with an improved DenseNet module; it can be applied to medical images, addresses the above problems, and also transfers to natural images. The Transformer and dense concatenation strengthen the method's feature extraction by limiting feature loss, which reduces the risk of edge blurring. We compared this method against several representative traditional methods and more advanced deep learning methods. The experimental results show that the Transformer and the improved DenseNet module have a strong feature extraction capability, and that the method yields good results in both visual quality and objective image evaluation metrics. © 2024 Elsevier Ltd
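
The record above gives only the abstract, but the two ideas it names, dense concatenation to preserve intermediate features and Transformer attention for global context, can be illustrated concretely. The following is a minimal, hypothetical PyTorch sketch of that combination; it is not the authors' DesTrans code, and the class names (DenseBlock, ToyFusionNet), layer sizes, and the single-convolution fusion head are all illustrative assumptions.

    # Hypothetical sketch, NOT the published DesTrans implementation:
    # a DenseNet-style block whose concatenations keep every earlier
    # feature map (limiting feature loss), followed by a standard
    # Transformer encoder layer that attends over all spatial positions.
    import torch
    import torch.nn as nn

    class DenseBlock(nn.Module):
        """Each conv layer sees the concatenation of all earlier outputs."""
        def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 3):
            super().__init__()
            self.layers = nn.ModuleList()
            ch = in_ch
            for _ in range(n_layers):
                self.layers.append(nn.Sequential(
                    nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                ))
                ch += growth  # dense concatenation grows the channel count
            self.out_ch = ch

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feats = [x]
            for layer in self.layers:
                feats.append(layer(torch.cat(feats, dim=1)))
            return torch.cat(feats, dim=1)  # no intermediate feature is dropped

    class ToyFusionNet(nn.Module):
        """Encode each modality densely, mix tokens globally, decode to one image."""
        def __init__(self, growth: int = 16):
            super().__init__()
            self.enc = DenseBlock(in_ch=1, growth=growth)
            d_model = 2 * self.enc.out_ch  # features of both modalities, concatenated
            self.transformer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=2, batch_first=True)
            self.decode = nn.Conv2d(d_model, 1, kernel_size=1)

        def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            f = torch.cat([self.enc(a), self.enc(b)], dim=1)   # B, C, H, W
            B, C, H, W = f.shape
            tokens = f.flatten(2).transpose(1, 2)              # B, H*W, C
            tokens = self.transformer(tokens)                  # global self-attention
            f = tokens.transpose(1, 2).reshape(B, C, H, W)
            return torch.sigmoid(self.decode(f))               # fused image in [0, 1]

    if __name__ == "__main__":
        ct, mri = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
        fused = ToyFusionNet()(ct, mri)
        print(fused.shape)  # torch.Size([1, 1, 32, 32])

The dense concatenation is what distinguishes this from a plain CNN encoder: because every layer's output is carried forward verbatim, fine edge detail from early layers survives to the fusion stage, which is the mechanism the abstract credits for reducing edge blurring.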