ResCCFusion: Infrared and visible image fusion network based on ResCC module and spatial criss-cross attention models

Cited by: 5
Authors
Xiong, Zhang [1 ]
Zhang, Xiaohui [1 ]
Han, Hongwei [1 ]
Hu, Qingping [1 ]
Affiliations
[1] Naval Univ Engn, Dept Weap Engn, Wuhan 430030, Peoples R China
Keywords
Image fusion; Auto-encoder; Residual network; Infrared image; Visible image; INFORMATION; FRAMEWORK; NEST
DOI
10.1016/j.infrared.2023.104962
Chinese Library Classification
TH7 [Instruments and Meters]
Discipline Classification Codes
0804; 080401; 081102
Abstract
We propose an infrared and visible image fusion method based on the ResCC module and spatial criss-cross attention models. The method adopts an auto-encoder structure consisting of an encoder network, fusion layers, and a decoder network. The encoder network comprises a convolution layer and three densely connected ResCC blocks. Each ResCC block extracts multi-scale features from the source images without downsampling, retaining as many feature details as possible for fusion. The fusion layer adopts spatial criss-cross attention models, which capture contextual information in both the horizontal and vertical directions; restricting attention to these two directions also reduces the cost of computing the attention maps. The decoder network consists of four convolution layers that reconstruct the fused image from the feature maps. Experiments on public datasets demonstrate that the proposed method achieves better fusion performance in both objective and subjective evaluations than other state-of-the-art fusion methods. The code is available at https://github.com/xiongzhangzzz/ResCCFusion.
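To illustrate the criss-cross idea described in the abstract, the following is a minimal NumPy sketch (not the authors' implementation, which is in their repository): each spatial position attends only to the positions in its own row and column, so the attention map per query has roughly H + W entries instead of H × W. The function name and the simplification of concatenating row and column keys (which counts the query position twice; the usual formulation uses H + W − 1 keys) are assumptions for illustration only.

```python
import numpy as np

def criss_cross_attention(feat):
    """Simplified criss-cross attention over a (C, H, W) feature map.

    Each position (i, j) attends to the positions in row i and column j
    only, rather than all H * W positions, which is what reduces the
    attention-map computation. Illustrative sketch, not the paper's code.
    """
    C, H, W = feat.shape
    out = np.zeros_like(feat, dtype=float)
    for i in range(H):
        for j in range(W):
            q = feat[:, i, j]                     # query vector, shape (C,)
            row = feat[:, i, :]                   # (C, W) keys in the same row
            col = feat[:, :, j]                   # (C, H) keys in the same column
            keys = np.concatenate([row, col], 1)  # (C, H + W); query counted twice
            scores = q @ keys                     # (H + W,) similarity scores
            w = np.exp(scores - scores.max())
            w /= w.sum()                          # softmax over criss-cross keys
            out[:, i, j] = keys @ w               # attention-weighted sum
    return out
```

For an H × W map, full self-attention scores H·W keys per query while this scheme scores only H + W, which is the computational saving the abstract attributes to attending in the two directions.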
Pages: 10