A dual-branch infrared and visible image fusion network using progressive image-wise feature transfer

Cited by: 0
Authors
Xu, Shaoping [1 ]
Zhou, Changfei [1 ]
Xiao, Jian [1 ]
Tao, Wuyong [1 ]
Dai, Tianyu [1 ]
Affiliations
[1] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Jiangxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Infrared and visible image fusion; Dual-branch fusion network; Progressive image-wise feature transfer; Transformer module; CLIP loss; NEST;
DOI
10.1016/j.jvcir.2024.104190
CLC classification number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
To achieve a fused image that contains rich texture details and prominent targets, we present a progressive dual-branch infrared and visible image fusion network called PDFusion, which incorporates the Transformer module. Initially, the proposed network is divided into two branches to extract infrared and visible features independently. Subsequently, the image-wise transfer block (ITB) is introduced to fuse the infrared and visible features at different layers, facilitating the exchange of information between features. The fused features are then fed back into both pathways to contribute to the subsequent feature extraction process. Moreover, in addition to conventional pixel-level and structured loss functions, the contrastive language-image pretraining (CLIP) loss is introduced to guide the network training. Experimental results on publicly available datasets demonstrate the promising performance of PDFusion in the task of infrared and visible image fusion. The exceptional fusion performance of the proposed fusion network can be attributed to the following reasons: (1) The ITB, particularly with the integration of the Transformer, enhances the capability of representation learning. The Transformer module captures long-range dependencies among image features, enabling a global receptive field that integrates contextual information from the entire image. This leads to a more comprehensive fusion of features. (2) The feature loss based on the CLIP image encoder minimizes the discrepancy between the generated and target images. Consequently, it promotes the generation of semantically coherent and visually appealing fused images. The source code of our method can be found at https://github.com/Changfei-Zhou/PDFusion.
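The progressive scheme described in the abstract (each branch extracts features independently, the ITB fuses them at every layer, and the fused result is fed back into both pathways) can be illustrated structurally. The sketch below is a minimal, framework-free rendering of that control flow only; `extract_ir`, `extract_vis`, and `itb_fuse` are hypothetical stand-ins and do not reflect the paper's actual layer designs.

```python
# Structural sketch of progressive image-wise feature transfer (hypothetical,
# NOT the paper's implementation). Features are modeled as plain floats so the
# control flow -- extract, fuse, feed back -- is visible without a DL framework.

def extract_ir(feat, layer):
    # stand-in for one infrared-branch feature-extraction layer
    return feat * 0.9 + layer

def extract_vis(feat, layer):
    # stand-in for one visible-branch feature-extraction layer
    return feat * 1.1 + layer

def itb_fuse(ir_feat, vis_feat):
    # stand-in for the image-wise transfer block (ITB): exchanges information
    # between the two branches and returns a shared fused feature
    return 0.5 * (ir_feat + vis_feat)

def progressive_fusion(ir_input, vis_input, num_layers=3):
    ir, vis = ir_input, vis_input
    fused = None
    for layer in range(num_layers):
        # 1. each branch extracts features independently
        ir = extract_ir(ir, layer)
        vis = extract_vis(vis, layer)
        # 2. the ITB fuses the two branches at this layer
        fused = itb_fuse(ir, vis)
        # 3. the fused feature is fed back into BOTH pathways, steering
        #    subsequent extraction (the "progressive" aspect)
        ir, vis = fused, fused
    return fused
```

The key design point the sketch captures is step 3: unlike late-fusion schemes that merge the two branches once at the end, the fused representation re-enters both branches after every layer, so each subsequent extraction step already sees cross-modal information.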
Pages: 11
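The CLIP loss mentioned in the abstract is described there as a feature loss: the fused and target images are passed through the CLIP image encoder and their embedding discrepancy is penalized. A common way to measure that discrepancy is one minus cosine similarity in the embedding space; the sketch below illustrates this shape with a toy stand-in for the encoder, since the real CLIP encoder is a large pretrained model (the stand-in `clip_image_embed` and its two-dimensional features are purely illustrative assumptions).

```python
import math

def clip_image_embed(image):
    # Hypothetical stand-in for a frozen CLIP image encoder: maps an "image"
    # (here just a list of pixel values) to an L2-normalized feature vector.
    # In the paper this role is played by the pretrained CLIP image encoder.
    feats = [sum(image), max(image) - min(image)]
    norm = math.sqrt(sum(f * f for f in feats)) or 1.0
    return [f / norm for f in feats]

def clip_feature_loss(fused_image, target_image):
    # Feature-space discrepancy: 1 - cosine similarity between the encoder
    # embeddings of the fused and target images (0 when they coincide).
    f = clip_image_embed(fused_image)
    t = clip_image_embed(target_image)
    cos = sum(a * b for a, b in zip(f, t))
    return 1.0 - cos
```

Because the comparison happens in a pretrained semantic embedding space rather than pixel space, this kind of loss rewards semantic agreement with the target, complementing the pixel-level and structural terms the abstract also mentions.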
Related papers
50 records
  • [41] A Multilevel Hybrid Transmission Network for Infrared and Visible Image Fusion
    Li, Qingqing
    Han, Guangliang
    Liu, Peixun
    Yang, Hang
    Chen, Dianbing
    Sun, Xinglong
    Wu, Jiajia
    Liu, Dongxu
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [42] VMDM-fusion: a saliency feature representation method for infrared and visible image fusion
    Yang, Yong
    Liu, Jia-Xiang
    Huang, Shu-Ying
    Lu, Hang-Yuan
    Wen, Wen-Ying
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (06) : 1221 - 1229
  • [44] Laplacian Pyramid Fusion Network With Hierarchical Guidance for Infrared and Visible Image Fusion
    Yao, Jiaxin
    Zhao, Yongqiang
    Bu, Yuanyang
    Kong, Seong G.
    Chan, Jonathan Cheung-Wai
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 4630 - 4644
  • [45] MFTCFNet: infrared and visible image fusion network based on multi-layer feature tightly coupled
    Hao, Shuai
    Li, Tong
    Ma, Xu
    Li, Tian-Qi
    Qi, Tian-Rui
    Li, Jia-Hao
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (11) : 8217 - 8228
  • [46] SFINet: A semantic feature interactive learning network for full-time infrared and visible image fusion
    Song, Wenhao
    Li, Qilei
    Gao, Mingliang
    Chehri, Abdellah
    Jeon, Gwanggil
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 261
  • [47] Infrared and Harsh Light Visible Image Fusion Using an Environmental Light Perception Network
    Yan, Aiyun
    Gao, Shang
    Lu, Zhenlin
    Jin, Shuowei
    Chen, Jingrong
    ENTROPY, 2024, 26 (08)
  • [48] Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy
    Chen, Yili
    Wan, Minjie
    Xu, Yunkai
    Cao, Xiqing
    Zhang, Xiaojie
    Chen, Qian
    Gu, Gouhua
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2022, 39 (12) : 2257 - 2270
  • [49] DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network
    Yin, Ruyi
    Yang, Bin
    Huang, Zuyan
    Zhang, Xiaozhi
    SENSORS, 2023, 23 (16)
  • [50] MAGAN: Multiattention Generative Adversarial Network for Infrared and Visible Image Fusion
    Huang, Shuying
    Song, Zixiang
    Yang, Yong
    Wan, Weiguo
    Kong, Xiangkai
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72