AFSFusion: An Adjacent Feature Shuffle Combination Network for Infrared and Visible Image Fusion

Cited by: 1
Authors
Hu, Yufeng [1 ]
Xu, Shaoping [2 ]
Cheng, Xiaohui [2 ]
Zhou, Changfei [2 ]
Xiong, Minghai [2 ]
Affiliations
[1] Nanchang Univ, Sch Qianhu, Nanchang 330031, Peoples R China
[2] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 09
Keywords
infrared and visible image fusion; adjacent feature shuffle fusion; adaptive weight adjustment strategy; subjective and objective evaluation; INFORMATION;
DOI
10.3390/app13095640
CLC Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
To obtain fused images with excellent contrast, distinct target edges, and well-preserved details, we propose an adaptive image fusion network called the adjacent feature shuffle-fusion network (AFSFusion). The proposed network adopts a UNet-like architecture and incorporates key refinements to the network architecture and loss functions. Regarding the network architecture, the proposed two-branch adjacent feature fusion module, called AFSF, expands the number of channels to fuse the feature channels of several adjacent convolutional layers in the first half of AFSFusion, enhancing its ability to extract, transmit, and modulate feature information. We replace the original rectified linear unit (ReLU) with leaky ReLU to alleviate the vanishing gradient problem and add a channel shuffle operation at the end of AFSF to facilitate information interaction between features. Concerning the loss functions, we propose an adaptive weight adjustment (AWA) strategy that assigns weight values to the corresponding pixels of the infrared (IR) and visible images in the fused image according to the VGG16 gradient feature responses of the IR and visible images, allowing the network to handle different scene contents efficiently. After normalization, the weight values serve as weighting coefficients for the two sets of images. The weighting coefficients are applied to three loss terms simultaneously: mean square error (MSE), structural similarity (SSIM), and total variation (TV), resulting in clearer objects and richer texture detail in the fused images. We conducted a series of experiments on several benchmark databases, and the results demonstrate the effectiveness of the proposed network architecture and its superiority over other state-of-the-art fusion methods. It ranks first in several objective metrics and exhibits sharper, richer edges of specific targets, which is more in line with human visual perception. The remarkable performance gain is ascribed to the proposed AFSF module and AWA strategy, which enable balanced feature extraction, fusion, and modulation of image features throughout the process.
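The abstract describes the AFSF module only at a high level (adjacent-feature concatenation, two convolutional branches, leaky ReLU, and a closing channel shuffle). The following minimal PyTorch-style sketch illustrates those ideas under stated assumptions: the class name AFSFBlock, the kernel sizes, the channel counts, and the exact branch topology are illustrative guesses, not the authors' implementation.

    # Minimal sketch of an adjacent feature shuffle-fusion (AFSF) block, assuming PyTorch.
    # Kernel sizes, channel counts, and the two-branch topology are illustrative assumptions.
    import torch
    import torch.nn as nn

    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        # Interleave channels across groups so features from the two branches can interact,
        # mirroring the channel shuffle operation placed at the end of AFSF.
        b, c, h, w = x.shape
        x = x.view(b, groups, c // groups, h, w).transpose(1, 2).contiguous()
        return x.view(b, c, h, w)

    class AFSFBlock(nn.Module):
        # Hypothetical block that fuses the feature maps of two adjacent encoder layers;
        # it assumes both inputs have already been resized to the same spatial resolution.
        def __init__(self, ch_prev: int, ch_curr: int, ch_out: int):
            super().__init__()
            fused_in = ch_prev + ch_curr  # channel expansion via adjacent-feature concatenation
            # Leaky ReLU replaces ReLU to alleviate vanishing gradients, as stated in the abstract.
            self.branch_a = nn.Sequential(
                nn.Conv2d(fused_in, ch_out, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            )
            self.branch_b = nn.Sequential(
                nn.Conv2d(fused_in, ch_out, kernel_size=1),
                nn.LeakyReLU(0.2, inplace=True),
            )

        def forward(self, feat_prev: torch.Tensor, feat_curr: torch.Tensor) -> torch.Tensor:
            x = torch.cat([feat_prev, feat_curr], dim=1)                 # adjacent-feature fusion
            y = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)   # two parallel branches
            return channel_shuffle(y, groups=2)                          # cross-branch interaction

For example, AFSFBlock(64, 128, 128) would combine a 64-channel and a 128-channel feature map into a 256-channel, channel-shuffled output.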
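The AWA-weighted loss can likewise be written schematically. In the sketch below, the weights are treated as global scalars for readability (the abstract describes per-pixel values), and the balancing constants \lambda_{1}, \lambda_{2}, \lambda_{3} and the exact form of the gradient response G(\cdot) are assumptions introduced only to make the structure of the three weighted terms explicit:

    w_{ir} = \frac{G(I_{ir})}{G(I_{ir}) + G(I_{vi})}, \qquad w_{vi} = 1 - w_{ir},

    \mathcal{L} = \sum_{k \in \{ir,\,vi\}} w_{k}\left[\lambda_{1}\,\mathrm{MSE}(F, I_{k}) + \lambda_{2}\left(1 - \mathrm{SSIM}(F, I_{k})\right) + \lambda_{3}\,\mathrm{TV}(F - I_{k})\right],

where G(\cdot) denotes the (normalized) VGG16 gradient feature response of a source image, I_{ir} and I_{vi} are the infrared and visible inputs, and F is the fused image.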
Pages: 20
Related Papers (50 total; entries 21-30 shown)
  • [21] Attention based dual UNET network for infrared and visible image fusion
    Wang, Xuejiao
    Hua, Zhen
    Li, Jinjiang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (25) : 66959 - 66980
  • [22] Adjustable Visible and Infrared Image Fusion
    Wu, Boxiong
    Nie, Jiangtao
    Wei, Wei
    Zhang, Lei
    Zhang, Yanning
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 13463 - 13477
  • [23] PSMFF: A progressive series-parallel modality feature filtering framework for infrared and visible image fusion
    Xie, Shidong
    Li, Haiyan
    Wang, Zhengyu
    Zhou, Dongming
    Ding, Zhaisheng
    Liu, Yanyu
    DIGITAL SIGNAL PROCESSING, 2023, 134
  • [24] Infrared and visible image fusion with convolutional neural networks
    Liu, Yu
    Chen, Xun
    Cheng, Juan
    Peng, Hu
    Wang, Zengfu
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2018, 16 (03)
  • [25] ESFuse: Weak Edge Structure Perception Network for Infrared and Visible Image Fusion
    Liu, Wuyang
    Tan, Haishu
    Cheng, Xiaoqi
    Li, Xiaosong
    ELECTRONICS, 2024, 13 (20)
  • [26] Infrared and Visible Image Fusion Based on Dual Channel Residual Dense Network
    Feng Xin
    Yang Jieming
    Zhang Hongde
    Qiu Guohang
    ACTA PHOTONICA SINICA, 2023, 52 (11)
  • [27] Infrared and Visible Image Fusion via Texture Conditional Generative Adversarial Network
    Yang, Yong
    Liu, Jiaxiang
    Huang, Shuying
    Wan, Weiguo
    Wen, Wenying
    Guan, Juwei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (12) : 4771 - 4783
  • [28] A Review on Infrared and Visible Image Fusion Techniques
    Patel, Ami
    Chaudhary, Jayesh
    INTELLIGENT COMMUNICATION TECHNOLOGIES AND VIRTUAL MOBILE NETWORKS, ICICV 2019, 2020, 33 : 127 - 144
  • [29] A deep learning and image enhancement based pipeline for infrared and visible image fusion
    Qi, Jin
    Eyob, Deboch
    Fanose, Mola Natnael
    Wang, Lingfeng
    Cheng, Jian
    NEUROCOMPUTING, 2024, 578
  • [30] Infrared-Visible Image Fusion through Feature-Based Decomposition and Domain Normalization
    Chen, Weiyi
    Miao, Lingjuan
    Wang, Yuhao
    Zhou, Zhiqiang
    Qiao, Yajun
    REMOTE SENSING, 2024, 16 (06)