Infrared and Visible Image Fusion Network with Multi-Relation Perception

Times Cited: 0
Authors
Li, Xiaoling [1 ]
Chen, Houjin [1 ]
Wang, Minjun [1 ]
Sun, Jia [1 ]
Li, Yanfeng [1 ]
Chen, Luyifu [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Infrared image; Visible image; Multi-relation perception; Wavelet transform; PERFORMANCE;
DOI
10.11999/JEIT231062
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
A multi-relation perception network for infrared and visible image fusion is proposed in this paper to fully integrate consistent features and complementary features between infrared and visible images. First, a dual-branch encoder module is used to extract features from the source images. The extracted features are then fed into the fusion strategy module based on multi-relation perception. Finally, a decoder module is used to reconstruct the fused features and generate the final fused image. In this fusion strategy module, the feature relationship perception and the weight relationship perception are constructed by exploring the interactions between the shared relationship, the differential relationship, and the cumulative relationship across different modalities, so as to integrate consistent features and complementary features between different modalities and obtain fused features. To constrain network training and preserve the intrinsic features of the source images, a wavelet transform-based loss function is developed to assist in preserving low-frequency components and high-frequency components of the source images during the fusion process. Experiments indicate that, compared to the state-of-the-art deep learning-based image fusion methods, the proposed method can fully integrate consistent features and complementary features between source images, thereby successfully preserving the background information of visible images and the thermal targets of infrared images. Overall, the fusion performance of the proposed method surpasses that of the compared methods.
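The abstract's wavelet-transform-based loss can be sketched in miniature. Below is a single-level 2-D Haar decomposition and an illustrative loss that penalizes the fused image's low-frequency band for drifting from the source low bands and its high-frequency bands for losing source detail. The target choices (mean of source low bands, max-magnitude of source high bands), function names, and weights are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar wavelet transform of an H x W array
    (H and W even). Returns the low band LL and the three high
    bands (LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row-pair average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row-pair difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def wavelet_fusion_loss(fused, ir, vis, w_low=1.0, w_high=1.0):
    """Illustrative wavelet-domain loss (hypothetical targets):
    the fused low band should track the mean of the source low
    bands; each fused high band should track the element-wise
    max-magnitude of the corresponding source high bands."""
    f_ll, f_hi = haar_dwt2(fused)
    i_ll, i_hi = haar_dwt2(ir)
    v_ll, v_hi = haar_dwt2(vis)
    low = np.abs(f_ll - (i_ll + v_ll) / 2.0).mean()
    high = 0.0
    for fh, ih, vh in zip(f_hi, i_hi, v_hi):
        target = np.where(np.abs(ih) >= np.abs(vh), ih, vh)
        high += np.abs(fh - target).mean()
    return w_low * low + w_high * high
```

If the fused image equals both sources, both terms vanish and the loss is zero; in practice such a term would be combined with intensity and gradient losses and back-propagated through the fusion network.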
Pages: 2217-2227
Number of pages: 11
Cited References
25 in total
[1]  
[Anonymous], 2023, Journal of Electronics & Information Technology, V45, P2749, DOI 10.11999/JEIT221361
[2]   A human perception inspired quality metric for image fusion based on regional information [J].
Chen, Hao ;
Varshney, Pramod K. .
INFORMATION FUSION, 2007, 8 (02) :193-207
[3]   MUFusion: A general unsupervised image fusion network based on memory unit [J].
Cheng, Chunyang ;
Xu, Tianyang ;
Wu, Xiao-Jun .
INFORMATION FUSION, 2023, 92 :80-92
[4]  
GAO Shaobing, Multi- 3
[5]   Coordinate Attention for Efficient Mobile Network Design [J].
Hou, Qibin ;
Zhou, Daquan ;
Feng, Jiashi .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :13708-13717
[6]   RFN-Nest: An end-to-end residual fusion network for infrared and visible images [J].
Li, Hui ;
Wu, Xiao-Jun ;
Kittler, Josef .
INFORMATION FUSION, 2021, 73 :72-86
[7]   DenseFuse: A Fusion Approach to Infrared and Visible Images [J].
Li, Hui ;
Wu, Xiao-Jun .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (05) :2614-2623
[8]   MrFDDGAN: Multireceptive Field Feature Transfer and Dual Discriminator-Driven Generative Adversarial Network for Infrared and Color Visible Image Fusion [J].
Li, Junwu ;
Li, Binhua ;
Jiang, Yaoxi ;
Tian, Longwei ;
Cai, Weiwei .
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
[9]  
Linhao QU, 2022, 36 AAAI C ART INT, P2126, DOI 10.1609/aaai.v36i2.20109
[10]   CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [J].
Liu, Jinyuan ;
Lin, Runjia ;
Wu, Guanyao ;
Liu, Risheng ;
Luo, Zhongxuan ;
Fan, Xin .
INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (05) :1748-1775