MEEAFusion: Multi-Scale Edge Enhancement and Joint Attention Mechanism Based Infrared and Visible Image Fusion

Cited by: 1
Authors
Xie, Yingjiang [1 ]
Fei, Zhennan [1 ]
Deng, Da [1 ]
Meng, Lingshuai [1 ]
Niu, Fu [1 ]
Sun, Jinggong [1 ]
Affiliations
[1] PLA, Acad Mil Sci, Syst Engn Inst, Beijing 100166, Peoples R China
Keywords
edge enhancement; attention mechanism; image fusion; infrared image; visible image; NETWORK; NEST;
DOI
10.3390/s24175860
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
Infrared and visible image fusion can integrate rich edge details and salient infrared targets, producing high-quality images suitable for advanced vision tasks. However, most available algorithms struggle to fully extract detailed features and overlook the interaction of complementary features across different modal images during feature fusion. To address this gap, this study presents a novel fusion method based on multi-scale edge enhancement and a joint attention mechanism (MEEAFusion). Initially, convolution kernels of varying scales were utilized to obtain shallow features with multiple receptive fields unique to the source image. Subsequently, a multi-scale gradient residual block (MGRB) was developed to capture the high-level semantic information and low-level edge texture information of the image, enhancing the representation of fine-grained features. Then, complementary features between the infrared and visible images were defined, and a cross-transfer attention fusion block (CAFB) with joint spatial and channel attention was devised to refine the critical supplemental information. This allowed the network to obtain fused features rich in both common and complementary information, realizing feature interaction and pre-fusion. Lastly, the features were reconstructed to obtain the fused image. Extensive experiments on three benchmark datasets demonstrated that the proposed MEEAFusion offers considerable strengths in rich texture details, salient infrared targets, and distinct edge contours, achieving superior fusion performance.
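The gradient residual idea behind the MGRB can be illustrated with a minimal NumPy sketch. This is an assumption for illustration only, not the paper's exact block design: a feature map is combined residually with its Sobel gradient magnitude, so edge texture is re-injected alongside the original features.

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude of a 2D array via 3x3 Sobel kernels (edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w), dtype=float)
    gy = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_residual_block(feat):
    """Hypothetical gradient-residual step: features plus their edge map,
    mimicking how an MGRB re-injects low-level edge texture."""
    return feat + sobel_gradient(feat)
```

On a flat (edge-free) input the gradient branch contributes nothing and the block reduces to an identity, while any intensity step raises the output along the edge; in the actual network the gradient branch would operate per channel on learned feature maps.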
Pages: 28
Related References (57 total)
[31]   DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs [J].
Prabhakar, K. Ram ;
Srikar, V. Sai ;
Babu, R. Venkatesh .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :4724-4732
[32]   PIAFusion: A progressive infrared and visible image fusion network based on illumination aware [J].
Tang, Linfeng ;
Yuan, Jiteng ;
Zhang, Hao ;
Jiang, Xingyu ;
Ma, Jiayi .
INFORMATION FUSION, 2022, 83 :79-92
[33]   Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network [J].
Tang, Linfeng ;
Yuan, Jiteng ;
Ma, Jiayi .
INFORMATION FUSION, 2022, 82 :28-42
[34]   DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer [J].
Tang, Wei ;
He, Fazhi ;
Liu, Yu ;
Duan, Yansong ;
Si, Tongzhen .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (07) :3159-3172
[35]
Toet A. TNO Image Fusion Dataset.
[36]
Venkatanath N, 2015, NATIONAL CONFERENCE ON COMMUNICATIONS (NCC)
[37]   An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection [J].
Wang, Di ;
Liu, Jinyuan ;
Liu, Risheng ;
Fan, Xin .
INFORMATION FUSION, 2023, 98
[38]   DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion [J].
Wang, Hongfeng ;
Wang, Jianzhong ;
Xu, Haonan ;
Sun, Yong ;
Yu, Zibo .
SENSORS, 2022, 22 (14)
[39]   A general image fusion framework using multi-task semi-supervised learning [J].
Wang, Wu ;
Deng, Liang-Jian ;
Vivone, Gemine .
INFORMATION FUSION, 2024, 108
[40]   FLFuse-Net: A fast and lightweight infrared and visible image fusion network via feature flow and edge compensation for salient information [J].
Weimin, Xue ;
Anhong, Wang ;
Lijun, Zhao .
INFRARED PHYSICS & TECHNOLOGY, 2022, 127