An Infrared and Visible Image Fusion Approach of Self-calibrated Residual Networks and Feature Embedding

Cited by: 0
Authors
Dai J. [1,2]
Luo Z. [1,2]
Li C. [3]
Affiliations
[1] School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin
[2] Artificial Intelligence Key Laboratory of Sichuan University of Science and Engineering, Yibin
[3] School of Computer Science and Technology, Southwest Minzu University, Chengdu
Funding
National Natural Science Foundation of China
Keywords
algorithms; feature embedding; feature extraction; image fusion; image reconstruction; self-calibrated convolutions
DOI
10.2174/2666255815666220518143643
Abstract
Background: The fusion of infrared and visible images is a hot topic in the field of image fusion. During fusion, the choice of feature extraction and feature processing methods directly affects fusion performance. Objectives: The low resolution (small size) of high-level features leads to a loss of spatial information. On the other hand, low-level features are not discriminative because they insufficiently filter out background and noise. Methods: To address the insufficient use of features in existing methods, a new fusion approach (SC-Fuse) based on self-calibrated residual networks (SCNet) and feature embedding is proposed. The method improves the quality of image fusion from two aspects: feature extraction and feature processing. Results: First, self-calibrated modules are applied to the field of image fusion for the first time; they enlarge the receptive field so that feature maps carry more information. Second, ZCA (zero-phase component analysis) and the l1-norm are used to process features, and a feature embedding operation is proposed to make feature information at different levels complementary. Conclusion: Finally, a suitable strategy is given to reconstruct the fused image. Ablation experiments and comparisons with other representative algorithms demonstrate the effectiveness and superiority of SC-Fuse. © 2023 Bentham Science Publishers.
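The self-calibrated modules named in the abstract follow the self-calibrated convolution design of SCNet (Liu et al., CVPR 2020), in which one half of the channels is modulated by attention computed at a reduced resolution, enlarging the effective receptive field. Below is a minimal, hypothetical PyTorch sketch of such a block; the class name SCConv, the pooling ratio pooling_r, and all layer choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCConv(nn.Module):
    """Minimal self-calibrated convolution sketch (after Liu et al., CVPR 2020).

    Half of the channels pass through a plain 3x3 convolution; the other half
    is modulated by an attention map computed on a downsampled view, which
    enlarges the effective receptive field of the feature maps.
    """

    def __init__(self, channels: int, pooling_r: int = 4):
        super().__init__()
        assert channels % 2 == 0, "channels are split into two equal halves"
        c = channels // 2
        self.k1 = nn.Conv2d(c, c, 3, padding=1)  # plain branch
        self.k2 = nn.Sequential(                 # calibration branch at low resolution
            nn.AvgPool2d(kernel_size=pooling_r, stride=pooling_r),
            nn.Conv2d(c, c, 3, padding=1),
        )
        self.k3 = nn.Conv2d(c, c, 3, padding=1)
        self.k4 = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Self-calibration: attention derived from a down-/up-sampled view of x1.
        attn = torch.sigmoid(
            x1 + F.interpolate(self.k2(x1), size=x1.shape[2:],
                               mode="bilinear", align_corners=False)
        )
        y1 = self.k4(self.k3(x1) * attn)  # calibrated half
        y2 = self.k1(x2)                  # plain half
        return torch.cat([y1, y2], dim=1)
```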
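The abstract also mentions processing features with ZCA and the l1-norm. A common realization in fusion work is to ZCA-whiten each feature map and use the channel-wise l1-norm as a per-pixel activity measure that yields soft fusion weights. The sketch below shows only this weighting idea under those assumptions; the function names are hypothetical, and the paper's actual feature embedding operation is not detailed in the abstract.

```python
import torch

def zca_whiten(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """ZCA-whiten a feature map of shape (C, H, W) across channels (sketch)."""
    C, H, W = feat.shape
    x = feat.reshape(C, -1)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x @ x.t() / (x.shape[1] - 1)            # C x C channel covariance
    U, S, _ = torch.linalg.svd(cov)
    W_zca = U @ torch.diag(1.0 / torch.sqrt(S + eps)) @ U.t()
    return (W_zca @ x).reshape(C, H, W)

def fusion_weights(feat_ir: torch.Tensor, feat_vis: torch.Tensor):
    """Per-pixel l1-norm activity maps turned into soft fusion weights."""
    a_ir = zca_whiten(feat_ir).abs().sum(dim=0)   # l1-norm over channels -> (H, W)
    a_vis = zca_whiten(feat_vis).abs().sum(dim=0)
    w_ir = a_ir / (a_ir + a_vis + 1e-8)
    return w_ir, 1.0 - w_ir

# Usage: a fused feature map would be w_ir * feat_ir + w_vis * feat_vis
# (weights broadcast over channels) before image reconstruction.
```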
Pages: 2-13
Number of pages: 11