An Improved Hybrid Network With a Transformer Module for Medical Image Fusion

Cited by: 12
|
Authors
Liu, Yanyu
Zang, Yongsheng [1 ]
Zhou, Dongming [1 ]
Cao, Jinde [2 ,3 ]
Nie, Rencan [1 ]
Hou, Ruichao [4 ]
Ding, Zhaisheng [1 ]
Mei, Jiatian [1 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650504, Yunnan, Peoples R China
[2] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
[3] Yonsei Univ, Yonsei Frontier Lab, Seoul 03722, South Korea
[4] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; transformer; self-adaptive weight fusion; self-reconstruction; INFORMATION; PERFORMANCE; FRAMEWORK; PROTEIN; NEST;
DOI
10.1109/JBHI.2023.3264819
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Medical image fusion technology is an essential component of computer-aided diagnosis, which aims to extract useful cross-modality cues from raw signals to generate high-quality fused images. Many advanced methods focus on designing fusion rules, but there is still room for improvement in cross-modal information extraction. To this end, we propose a novel encoder-decoder architecture with three technical novelties. First, we divide the medical images into two attributes, namely pixel intensity distribution attributes and texture attributes, and thus design two self-reconstruction tasks to mine as many specific features as possible. Second, we propose a hybrid network combining a CNN and a transformer module to model both long-range and short-range dependencies. Moreover, we construct a self-adaptive weight fusion rule that automatically measures salient features. Extensive experiments on a public medical image dataset and other multimodal datasets show that the proposed method achieves satisfactory performance.
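To make the abstract's three components more concrete, below is a minimal, hypothetical PyTorch-style sketch of a hybrid CNN-plus-transformer encoder combined with a self-adaptive weight fusion step. The class and function names (HybridEncoder, self_adaptive_fusion), the channel width, the use of nn.TransformerEncoderLayer, and the L1-activity-based weighting are illustrative assumptions; the paper's actual layer design, self-reconstruction training tasks, and fusion rule are not specified in this record.

import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    """Sketch of a hybrid encoder: convolutions capture short-range (local)
    structure, while a transformer layer models long-range dependencies
    across spatial positions (an assumption about the paper's design)."""
    def __init__(self, channels=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, batch_first=True)

    def forward(self, x):
        f = self.cnn(x)                          # B x C x H x W
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)    # B x (H*W) x C
        tokens = self.attn(tokens)               # long-range interactions
        return tokens.transpose(1, 2).reshape(b, c, h, w)

def self_adaptive_fusion(f1, f2, eps=1e-8):
    """Illustrative self-adaptive weight fusion: per-pixel weights derived
    from feature activity (L1 norm over channels) blend the two modality
    feature maps; the actual salience measure in the paper may differ."""
    a1 = f1.abs().sum(dim=1, keepdim=True)
    a2 = f2.abs().sum(dim=1, keepdim=True)
    w1 = a1 / (a1 + a2 + eps)
    return w1 * f1 + (1.0 - w1) * f2

# Usage: fuse two single-channel medical images (e.g., MRI and a PET luminance channel)
encoder = HybridEncoder()
decoder = nn.Conv2d(32, 1, 3, padding=1)   # stand-in for the decoder branch
mri, pet = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
fused_features = self_adaptive_fusion(encoder(mri), encoder(pet))
fused_image = torch.sigmoid(decoder(fused_features))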
Pages: 3489-3500
Page count: 12
Related Papers
50 records in total
  • [1] MATR: Multimodal Medical Image Fusion via Multiscale Adaptive Transformer
    Tang, Wei
    He, Fazhi
    Liu, Yu
    Duan, Yansong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 5134 - 5149
  • [2] An end-to-end medical image fusion network based on Swin-transformer
    Yu, Kaixin
    Yang, Xiaoming
    Jeon, Seunggil
    Dou, Qingyu
    MICROPROCESSORS AND MICROSYSTEMS, 2023, 98
  • [3] THFuse: An infrared and visible image fusion network using transformer and hybrid feature extractor
    Chen, Jun
    Ding, Jianfeng
    Yu, Yang
    Gong, Wenping
    NEUROCOMPUTING, 2023, 527 : 71 - 82
  • [4] A Dual Cross Attention Transformer Network for Infrared and Visible Image Fusion
    Zhou, Zhuozhi
    Lan, Jinhui
    2024 7TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND BIG DATA, ICAIBD 2024, 2024, : 494 - 499
  • [5] HDCCT: Hybrid Densely Connected CNN and Transformer for Infrared and Visible Image Fusion
    Li, Xue
    He, Hui
    Shi, Jin
    ELECTRONICS, 2024, 13 (17)
  • [6] FATFusion: A functional-anatomical transformer for medical image fusion
    Tang, Wei
    He, Fazhi
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (04)
  • [7] HDCTfusion: Hybrid Dual-Branch Network Based on CNN and Transformer for Infrared and Visible Image Fusion
    Wang, Wenqing
    Li, Lingzhou
    Yang, Yifei
    Liu, Han
    Guo, Runyuan
    SENSORS, 2024, 24 (23)
  • [8] IMAGE FUSION TRANSFORMER
    Vibashan, V. S.
    Valanarasu, Jeya Maria Jose
    Oza, Poojan
    Patel, Vishal M.
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3566 - 3570
  • [9] DesTrans: A medical image fusion method based on Transformer and improved DenseNet
    Song Y.
    Dai Y.
    Liu W.
    Liu Y.
    Liu X.
    Yu Q.
    Liu X.
    Que N.
    Li M.
COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 174
  • [10] Multi-feature decomposition and transformer-fusion: an infrared and visible image fusion network based on multi-feature decomposition and transformer
    Li, Xujun
    Duan, Zhicheng
    Chang, Jia
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (06)