MCADFusion: a novel multi-scale convolutional attention decomposition method for enhanced infrared and visible light image fusion

Cited: 0
Authors
Zhang, Wangwei [1 ]
Dai, Menghao [1 ]
Zhou, Bin [2 ]
Wang, Changhai [1 ]
Affiliations
[1] Zhengzhou Univ Light Ind, Software Engn Coll, 136 Sci Rd, Zhengzhou 450000, Peoples R China
[2] Zhengzhou Univ Sci & Technol, Elect & Elect Engn Coll, 1 Xueyuan Rd, Zhengzhou 450064, Peoples R China
Source
ELECTRONIC RESEARCH ARCHIVE | 2024, Vol. 32, No. 8
Keywords
image fusion; multi-scale; convolutional attention decomposition; modal specificity; shared features; ENSEMBLE; NETWORK; NEST;
DOI
10.3934/era.2024233
CLC Classification Number
O1 [Mathematics];
Subject Classification Code
0701; 070101;
Abstract
This paper presents MCADFusion, a feature decomposition method designed for the fusion of infrared and visible images that incorporates target radiance and detailed texture. MCADFusion employs an innovative two-branch architecture that effectively extracts and decomposes both local and global features from different source images, thereby enhancing the processing of image feature information. The method begins with a multi-scale feature extraction module and a reconstructor module to obtain local and global feature information from rich source images. Subsequently, the local and global features of the different source images are decomposed using the channel attention module (CAM) and the spatial attention module (SAM). Feature fusion is then performed through a two-channel attention merging method. Finally, image reconstruction is achieved using the restormer module. During the training phase, MCADFusion employs a two-stage strategy to optimize the network parameters, resulting in high-quality fused images. Experimental results demonstrate that MCADFusion surpasses existing techniques in both subjective visual evaluation and objective assessment on the publicly available TNO and MSRS datasets, underscoring its superiority.
Pages: 5067-5089
Page count: 23
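To make the CAM/SAM decomposition described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of CBAM-style channel and spatial attention applied to infrared and visible feature maps in a toy two-branch merge. This is not the authors' implementation: the module names, channel sizes, weight sharing across branches, and the additive fusion rule are all assumptions made purely for illustration.

```python
# Minimal sketch (not the MCADFusion code): CBAM-style channel/spatial attention
# applied to two modality feature maps, then merged. All design choices here are
# illustrative assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """'CAM'-style block: re-weights feature channels via a squeeze-and-excite MLP."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                         # global average pool -> (N, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),   # channel squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),   # channel excite
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(x)                               # broadcast channel weights


class SpatialAttention(nn.Module):
    """'SAM'-style block: re-weights spatial locations from avg/max channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                    # (N, 1, H, W) average map
        mx, _ = x.max(dim=1, keepdim=True)                   # (N, 1, H, W) max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                                      # broadcast spatial weights


class TwoBranchFusion(nn.Module):
    """Toy two-branch merge: each modality's features pass through CAM then SAM,
    and the attended features are summed (an assumed stand-in for the paper's
    two-channel attention merging)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.cam = ChannelAttention(channels)
        self.sam = SpatialAttention()

    def forward(self, feat_ir, feat_vis):
        return self.sam(self.cam(feat_ir)) + self.sam(self.cam(feat_vis))


if __name__ == "__main__":
    # Dummy feature maps standing in for extracted IR / visible features.
    f_ir = torch.randn(1, 64, 128, 128)
    f_vis = torch.randn(1, 64, 128, 128)
    print(TwoBranchFusion(64)(f_ir, f_vis).shape)            # torch.Size([1, 64, 128, 128])
```

In this sketch the two branches share attention weights for brevity; a decomposition network of the kind the abstract describes would typically keep modality-specific and shared parameters separate.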