AFSFusion: An Adjacent Feature Shuffle Combination Network for Infrared and Visible Image Fusion

Cited by: 1
Authors
Hu, Yufeng [1]
Xu, Shaoping [2]
Cheng, Xiaohui [2]
Zhou, Changfei [2]
Xiong, Minghai [2]
Affiliations
[1] Nanchang Univ, Sch Qianhu, Nanchang 330031, Peoples R China
[2] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Iss. 9
Keywords
infrared and visible image fusion; adjacent feature shuffle fusion; adaptive weight adjustment strategy; subjective and objective evaluation; INFORMATION;
DOI
10.3390/app13095640
Chinese Library Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
To obtain fused images with excellent contrast, distinct target edges, and well-preserved details, we propose an adaptive image fusion network called the adjacent feature shuffle-fusion network (AFSFusion). The proposed network adopts a UNet-like architecture and incorporates key refinements to both the network architecture and the loss functions. Regarding the architecture, the proposed two-branch adjacent feature shuffle fusion (AFSF) module expands the number of channels to fuse the feature channels of several adjacent convolutional layers in the first half of AFSFusion, enhancing its ability to extract, transmit, and modulate feature information. We replace the original rectified linear unit (ReLU) with leaky ReLU to alleviate the vanishing gradient problem and add a channel shuffling operation at the end of AFSF to facilitate information interaction between feature channels. Concerning the loss functions, we propose an adaptive weight adjustment (AWA) strategy that assigns weight values to the corresponding pixels of the infrared (IR) and visible images in the fused image according to the VGG16 gradient feature responses of the IR and visible inputs, which allows the strategy to handle different scene contents efficiently. After normalization, the weight values serve as weighting coefficients for the two sets of images and are applied simultaneously to three loss terms: mean square error (MSE), structural similarity (SSIM), and total variation (TV), resulting in clearer objects and richer texture detail in the fused images. We conducted a series of experiments on several benchmark databases, and the results demonstrate the effectiveness of the proposed network architecture and its superiority over other state-of-the-art fusion methods. AFSFusion ranks first on several objective metrics and produces sharper, richer edges for specific targets, which is more in line with human visual perception. This performance gain is attributable to the proposed AFSF module and AWA strategy, which enable balanced extraction, fusion, and modulation of image features throughout the network.
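The two refinements described in the abstract can be illustrated with short sketches. Neither is the authors' released code: layer widths, kernel sizes, slope values, and the exact wiring below are assumptions made only to show the structure of an AFSF-style block, in which features from two adjacent encoder levels (assumed here to share the same spatial size) are concatenated to expand the channel count, processed by leaky-ReLU convolutions, and channel-shuffled at the end of the block.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Interleave channels across groups (ShuffleNet-style shuffle)."""
    b, c, h, w = x.size()
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class AFSFBlock(nn.Module):
    """Illustrative two-branch adjacent-feature fusion block (hypothetical layout)."""
    def __init__(self, ch_prev, ch_curr, ch_out):
        super().__init__()
        self.fuse = nn.Sequential(
            # channel expansion: adjacent-layer features are fused after concatenation
            nn.Conv2d(ch_prev + ch_curr, ch_out, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),   # leaky ReLU in place of ReLU, per the abstract
            nn.Conv2d(ch_out, ch_out, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, feat_prev, feat_curr):
        x = torch.cat([feat_prev, feat_curr], dim=1)  # fuse feature channels of adjacent layers
        x = self.fuse(x)
        return channel_shuffle(x, groups=2)           # channel shuffle to mix branch information
```

The AWA strategy can be sketched similarly. Here per-pixel weights are derived from the gradient magnitude of shallow VGG16 feature maps; the choice of VGG16 layers, the Sobel operator, the softmax normalization, and the omission of the SSIM term (and of per-pixel weighting for TV) are simplifications for brevity, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# shallow VGG16 feature extractor (the layer cut-off is an assumption)
_vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:9].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def _gradient_response(img):
    """Mean gradient magnitude of VGG16 feature maps, upsampled to the input size."""
    x = img.repeat(1, 3, 1, 1) if img.size(1) == 1 else img      # VGG expects 3 channels
    feat = _vgg.to(img.device)(x)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = feat.size(1)
    gx = F.conv2d(feat, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(feat, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
    g = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8).mean(dim=1, keepdim=True)
    return F.interpolate(g, size=img.shape[-2:], mode="bilinear", align_corners=False)

def awa_weights(ir, vis):
    """Normalized per-pixel weights for the IR and visible inputs."""
    with torch.no_grad():
        g = torch.cat([_gradient_response(ir), _gradient_response(vis)], dim=1)
        w = torch.softmax(g, dim=1)
    return w[:, :1], w[:, 1:]

def awa_loss(fused, ir, vis, lam_tv=0.1):
    """Weighted MSE plus a plain TV term (the SSIM term is omitted in this sketch)."""
    w_ir, w_vis = awa_weights(ir, vis)
    mse = (w_ir * (fused - ir) ** 2 + w_vis * (fused - vis) ** 2).mean()
    tv = (fused[..., :, 1:] - fused[..., :, :-1]).abs().mean() + \
         (fused[..., 1:, :] - fused[..., :-1, :]).abs().mean()
    return mse + lam_tv * tv
```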
Pages: 20