LeGFusion: Locally Enhanced Global Learning for Multimodal Image Fusion

Cited by: 1
Authors
Zhang, Jing [1 ]
Liu, Aiping [1 ]
Liu, Yu [2 ]
Qiu, Bensheng [1 ]
Xie, Qingguo [1 ]
Chen, Xun [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
[2] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
Keywords
Image fusion; Transformers; Task analysis; Sensors; Feature extraction; Generative adversarial networks; Biomedical imaging; Locally enhanced global learning; multimodal image fusion (MMIF); transformer; NETWORK; PERFORMANCE; INFORMATION; FRAMEWORK; NEST;
DOI
10.1109/JSEN.2024.3371056
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline classification codes
0808; 0809
Abstract
Multimodal image fusion (MMIF) provides a more comprehensive characterization of a scene by synthesizing a single image from multi-sensor images of that scene, thereby overcoming the limitations of single-type hardware. To handle MMIF tasks, current deep learning (DL)-based methods typically use convolutional neural networks (CNNs) or incorporate transformers to extract local and global contextual information from the source images. However, none of the existing works fully explores contextual information both across modalities and within a single modality, leading to limited fusion results. To this end, we propose a new MMIF method based on locally enhanced global learning, termed LeGFusion. Specifically, the network of LeGFusion is built on the locally enhanced transformer block (LETB), which captures long-range dependencies through nonoverlapping window-based self-attention while capturing useful local context by incorporating convolution operators into the transformer. On the one hand, several LETBs are deployed to extract global context from each modality while emphasizing its local information. On the other hand, the fusion module, which also consists of LETBs, is designed to integrate multimodal features by perceiving cross-modal local and global interactions. Powered by this exploration of intramodal and intermodal contextual information, the proposed LeGFusion is highly capable of capturing significant complementary information for image fusion. Extensive experiments are conducted on two types of MMIF tasks: infrared-visible image fusion (IVF) and medical image fusion. The qualitative and quantitative evaluation results demonstrate the superiority of LeGFusion over state-of-the-art methods. Furthermore, we validate the generalization ability of LeGFusion without fine-tuning and achieve excellent results.
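The abstract describes an LETB that pairs nonoverlapping window-based self-attention with convolutional local enhancement. The PyTorch sketch below illustrates one plausible form of such a block; the layer layout, dimensions, and placement of the depthwise convolution are assumptions made for illustration, not the authors' released implementation.

```python
# Minimal sketch of a "locally enhanced transformer block" (LETB)-style module:
# nonoverlapping window-based self-attention for long-range dependencies plus a
# depthwise convolution branch for local context. Details are assumed, not taken
# from the paper's code.
import torch
import torch.nn as nn


class LocallyEnhancedTransformerBlock(nn.Module):
    def __init__(self, dim=32, num_heads=4, window_size=8):
        super().__init__()
        self.window_size = window_size
        self.norm1 = nn.LayerNorm(dim)
        # Multi-head self-attention applied inside each nonoverlapping window.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Depthwise convolution injects local context (assumed enhancement path).
        self.local_conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        # x: (B, C, H, W); H and W are assumed divisible by the window size.
        B, C, H, W = x.shape
        ws = self.window_size
        # Partition the feature map into nonoverlapping ws x ws windows of tokens.
        windows = (
            x.view(B, C, H // ws, ws, W // ws, ws)
            .permute(0, 2, 4, 3, 5, 1)
            .reshape(-1, ws * ws, C)
        )
        # Self-attention within each window, with a residual connection.
        tokens = self.norm1(windows)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        windows = windows + attn_out
        # Reverse the window partition back to (B, C, H, W).
        x_attn = (
            windows.view(B, H // ws, W // ws, ws, ws, C)
            .permute(0, 5, 1, 3, 2, 4)
            .reshape(B, C, H, W)
        )
        # Local enhancement via depthwise convolution, added residually.
        x = x_attn + self.local_conv(x_attn)
        # Token-wise feed-forward network.
        tokens = x.permute(0, 2, 3, 1)                # (B, H, W, C)
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.permute(0, 3, 1, 2)             # back to (B, C, H, W)


if __name__ == "__main__":
    block = LocallyEnhancedTransformerBlock(dim=32, num_heads=4, window_size=8)
    feat = torch.randn(1, 32, 64, 64)   # e.g., features from one modality
    print(block(feat).shape)            # torch.Size([1, 32, 64, 64])
```

In the paper's design, blocks of this kind are reportedly stacked per modality for intramodal feature extraction and reused in the fusion module for cross-modal interaction; how the two modalities' tokens are combined there is not specified in the abstract and is not modeled above.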
Pages: 12806-12818
Number of pages: 13