FLFuse-Net: A fast and lightweight infrared and visible image fusion network via feature flow and edge compensation for salient information

Cited by: 32
Authors
Xue, Weimin [1]
Wang, Anhong [1]
Zhao, Lijun [1]
Affiliations
[1] Taiyuan Univ Sci & Technol, Inst Digital Media & Commun, Taiyuan 030024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Infrared and visible image fusion; Lightweight image fusion method; Deep-learning-based image fusion; MULTI-FOCUS IMAGE; SHEARLET TRANSFORM; FRAMEWORK;
DOI
10.1016/j.infrared.2022.104383
Chinese Library Classification (CLC)
TH7 [Instruments and meters];
Subject classification codes
0804 ; 080401 ; 081102 ;
Abstract
In this paper, a fast, lightweight image fusion network, FLFuse-Net, is proposed to generate a new perspective image with identical and discriminative features from both infrared and visible images. In this network, deep convolutional features are extracted and fused synchronously through feature flow, while the edge features of the salient targets from the infrared image are compensated asynchronously. First, we design an autoencoder network structure with cross-connections for simultaneous feature extraction and fusion. In this structure, the fusion strategy is carried out through feature flow rather than by using a fixed fusion strategy, as in previous works. Second, we propose an edge compensation branch for salient information with the corresponding edge loss function to obtain the edge features of salient information from infrared images. Third, our network is designed as a lightweight network with a small number of parameters and low computational complexity, resulting in lower hardware requirements and a faster calculation speed. The experimental results confirm that the proposed FLFuse-Net outperforms the state-of-the-art fusion methods in objective and subjective assessments with very few parameters, especially on the TNO Image Fusion and NIR Scenes datasets.
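The abstract describes the overall design: a cross-connected autoencoder in which fusion is carried out through the feature flow itself, plus an asynchronous edge compensation branch trained with an edge loss on the salient infrared targets. Since this record contains no code, the following PyTorch sketch is only a minimal illustration of that description under stated assumptions; the module names, channel widths, cross-connection scheme, and the Sobel-based edge loss are illustrative choices, not the authors' implementation.

# Minimal, illustrative PyTorch sketch of an FLFuse-Net-style fusion model.
# Layer widths, the cross-connection scheme, and the Sobel-based edge loss
# are assumptions for illustration; this is NOT the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv + ReLU, the basic unit used throughout this sketch."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class FeatureFlowFusionNet(nn.Module):
    """Two lightweight encoders whose features are cross-connected ("feature flow")
    at each stage, followed by a shared decoder that reconstructs the fused image.
    Channel counts are kept small to mimic the lightweight design."""

    def __init__(self, base_ch=16):
        super().__init__()
        # Infrared and visible encoders (two stages each).
        self.ir_enc1 = conv_block(1, base_ch)
        self.vis_enc1 = conv_block(1, base_ch)
        # Stage 2 takes the concatenation of both modalities (the cross-connection).
        self.ir_enc2 = conv_block(2 * base_ch, base_ch)
        self.vis_enc2 = conv_block(2 * base_ch, base_ch)
        # Decoder maps the merged features back to a single fused image.
        self.decoder = nn.Sequential(
            conv_block(2 * base_ch, base_ch),
            nn.Conv2d(base_ch, 1, 3, padding=1),
            nn.Sigmoid(),
        )
        # Edge compensation branch: predicts salient-edge responses from the
        # infrared features so an edge loss can be applied asynchronously.
        self.edge_branch = nn.Sequential(
            conv_block(base_ch, base_ch),
            nn.Conv2d(base_ch, 1, 3, padding=1),
        )

    def forward(self, ir, vis):
        f_ir = self.ir_enc1(ir)
        f_vis = self.vis_enc1(vis)
        # Cross-connection: each stream also sees the other stream's features,
        # so fusion happens through the feature flow rather than a fixed rule.
        mixed = torch.cat([f_ir, f_vis], dim=1)
        f_ir2 = self.ir_enc2(mixed)
        f_vis2 = self.vis_enc2(mixed)
        fused = self.decoder(torch.cat([f_ir2, f_vis2], dim=1))
        edge = self.edge_branch(f_ir2)  # compensation signal for IR salient edges
        return fused, edge


def sobel_edges(img):
    """Sobel gradient magnitude, used here as a stand-in edge operator."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def fusion_loss(fused, edge_pred, ir, vis, w_edge=0.5):
    """Reconstruction toward both inputs plus an edge term that pulls the
    predicted edge map toward the infrared edges (edge compensation)."""
    recon = F.l1_loss(fused, ir) + F.l1_loss(fused, vis)
    edge = F.l1_loss(edge_pred, sobel_edges(ir))
    return recon + w_edge * edge


if __name__ == "__main__":
    net = FeatureFlowFusionNet()
    ir, vis = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    fused, edge = net(ir, vis)
    print(fused.shape, fusion_loss(fused, edge, ir, vis).item())

The small channel count (base_ch=16) only mirrors the lightweight claim; the actual parameter budget, loss terms, and loss weighting in the paper may differ.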
Pages: 9