An Infrared and Visible Image Fusion Approach of Self-calibrated Residual Networks and Feature Embedding

Cited by: 0
Authors
Dai J. [1,2]
Luo Z. [1,2]
Li C. [3]
Affiliations
[1] School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin
[2] Artificial Intelligence Key Laboratory of Sichuan University of Science and Engineering, Yibin
[3] School of Computer Science and Technology, Southwest Minzu University, Chengdu
Funding
National Natural Science Foundation of China
Keywords
algorithms; feature embedding; feature extraction; image fusion; image reconstruction; self-calibrated convolutions
DOI
10.2174/2666255815666220518143643
Abstract
Background: The fusion of infrared and visible images is a hot topic in the field of image fusion. During fusion, the choice of feature extraction and feature processing methods directly affects fusion performance.
Objectives: The low resolution (small size) of high-level features leads to a loss of spatial information. On the other hand, low-level features are less discriminative because background clutter and noise are insufficiently filtered.
Methods: To address the insufficient use of features in existing methods, a new fusion approach (SC-Fuse) based on self-calibrated residual networks (SCNet) and feature embedding is proposed. The method improves fusion quality from two aspects: feature extraction and feature processing.
Results: First, self-calibrated modules are applied to image fusion for the first time, enlarging the receptive field so that the feature maps carry more contextual information. Second, ZCA (zero-phase component analysis) and the l1-norm are used to process features, and a feature embedding operation is proposed to exploit the complementarity of feature information at different levels.
Conclusion: Finally, a suitable strategy is given to reconstruct the fused image. Ablation experiments and comparisons with other representative algorithms demonstrate the effectiveness and superiority of SC-Fuse. © 2023 Bentham Science Publishers.
Pages: 2 - 13
Number of pages: 11
Related Papers
50 records in total
  • [1] Interactive Feature Embedding for Infrared and Visible Image Fusion
    Zhao, Fan
    Zhao, Wenda
    Lu, Huchuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 12810 - 12822
  • [2] Infrared and visible image fusion algorithm based on split-attention residual networks
    Qian K.
    Li T.
    Li Z.
    Chen M.
    Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University, 2022, 40 (06): 1404 - 1413
  • [3] FDFuse: Infrared and Visible Image Fusion Based on Feature Decomposition
    Cheng, Muhang
    Huang, Haiyan
    Liu, Xiangyu
    Mo, Hongwei
    Wu, Songling
    Zhao, Xiongbo
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74
  • [4] An Information Retention and Feature Transmission Network for Infrared and Visible Image Fusion
    Liu, Chang
    Yang, Bin
    Li, Yuehua
    Zhang, Xiaozhi
    Pang, Lihui
    IEEE SENSORS JOURNAL, 2021, 21 (13) : 14950 - 14959
  • [5] Infrared and Visible Image Fusion via General Feature Embedding From CLIP and DINOv2
    Luo, Yichuang
    Wang, Fang
    Liu, Xiaohu
    IEEE ACCESS, 2024, 12 : 99362 - 99371
  • [6] An Efficient Cross-Modality Self-Calibrated Network for Hyperspectral and Multispectral Image Fusion
    Wu, Huapeng
    Gui, Jie
    Xu, Yang
    Wu, Zebin
    Tang, Yuan Yan
    Wei, Zhihui
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [7] Infrared and Visible Image Fusion Based on Adversarial Feature Extraction and Stable Image Reconstruction
    Su, Weijian
    Huang, Yongdong
    Li, Qiufu
    Zuo, Fengyuan
    Liu, Lijun
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [8] FusionGRAM: An Infrared and Visible Image Fusion Framework Based on Gradient Residual and Attention Mechanism
    Wang, Jinxin
    Xi, Xiaoli
    Li, Dongmei
    Li, Fang
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [9] DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
    Wang, Hongfeng
    Wang, Jianzhong
    Xu, Haonan
    Sun, Yong
    Yu, Zibo
    SENSORS, 2022, 22 (14)
  • [10] Infrared and visible image fusion based on dilated residual attention network
    Mustafa, Hafiz Tayyab
    Yang, Jie
    Mustafa, Hamza
    Zareapoor, Masoumeh
    OPTIK, 2020, 224