S2CANet: A self-supervised infrared and visible image fusion based on co-attention network

Cited by: 1
Authors
Li, Dongyang [1 ]
Nie, Rencan [1 ,2 ]
Cao, Jinde [3 ,4 ]
Zhang, Gucheng [1 ]
Jin, Biaojian [1 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650500, Peoples R China
[2] Southeast Univ, Sch Automat, Nanjing 210096, Peoples R China
[3] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
[4] Yonsei Univ, Yonsei Frontier Lab, Seoul 03722, South Korea
Funding
National Natural Science Foundation of China;
Keywords
Infrared and visible image fusion; Self-supervised; Co-attention; Weighted fidelity loss; MULTI-FOCUS IMAGE; SPARSE REPRESENTATION; SHEARLET TRANSFORM; INFORMATION; FRAMEWORK;
DOI
10.1016/j.image.2024.117131
CLC (Chinese Library Classification)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Existing methods for infrared and visible image fusion (IVIF) often overlook the analysis of common and distinct features among source images. Consequently, this study develops a self-supervised infrared and visible image fusion method based on a co-attention network (S2CANet), incorporating auxiliary networks and backbone networks in its design. The primary concept is to transform both common and distinct features into common features and reconstructed features, subsequently deriving the distinct features through their subtraction. To enhance the similarity of common features, we design a fusion block based on co-attention (FBC) module specifically for this purpose, capturing common features through co-attention. Moreover, fine-tuning the auxiliary network enhances the image reconstruction effectiveness of the backbone network. Notably, the auxiliary network is employed only during training, to guide the self-supervised completion of IVIF by the backbone network. Additionally, we introduce a novel estimate for the weighted fidelity loss to guide the fused image in preserving more brightness from the source images. Experiments conducted on diverse benchmark datasets demonstrate the superior performance of our S2CANet over state-of-the-art IVIF methods.
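The decomposition described in the abstract can be sketched numerically. The toy NumPy example below is an illustration of the general idea, not the paper's actual network: `co_attention_common` is a hypothetical stand-in for the FBC module (a cross-source affinity used to retain mutually similar content), and the simple average stands in for the auxiliary network's reconstruction; the paper's key identity, distinct = reconstructed − common, then follows by subtraction.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_common(feat_ir, feat_vis):
    """Hypothetical FBC-style step: weight each source's features by a
    cross-source affinity so mutually similar (common) content dominates."""
    # feat_*: (C, N) -- C channels, N flattened spatial positions
    affinity = feat_ir.T @ feat_vis                       # (N, N) similarity
    common_ir = feat_vis @ softmax(affinity, axis=1).T    # vis attended to ir positions
    common_vis = feat_ir @ softmax(affinity, axis=0)      # ir attended to vis positions
    return 0.5 * (common_ir + common_vis)

rng = np.random.default_rng(0)
f_ir = rng.standard_normal((8, 16))    # toy infrared feature map
f_vis = rng.standard_normal((8, 16))   # toy visible feature map

common = co_attention_common(f_ir, f_vis)
reconstructed = 0.5 * (f_ir + f_vis)   # stand-in for the auxiliary network's output
distinct = reconstructed - common      # distinct features via subtraction
print(common.shape, distinct.shape)    # (8, 16) (8, 16)
```

By construction, the common and distinct features sum back to the reconstruction, which is the property the subtraction scheme relies on.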
Pages: 13
Related papers
50 records
  • [31] A Dual Cross Attention Transformer Network for Infrared and Visible Image Fusion
    Zhou, Zhuozhi
    Lan, Jinhui
    2024 7TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND BIG DATA, ICAIBD 2024, 2024, : 494 - 499
  • [32] MFSFFuse: Multi-receptive Field Feature Extraction for Infrared and Visible Image Fusion Using Self-supervised Learning
    Gao, Xueyan
    Liu, Shiguang
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT VI, 2024, 14452 : 118 - 132
  • [33] Unsupervised Infrared and Visible Image Fusion with Pixel Self-attention
    Cui, Saijia
    Zhou, Zhiqiang
    Li, Linhao
    Fei, Erfang
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 437 - 441
  • [34] Infrared and visible image fusion network based on low-light image enhancement and attention mechanism
    Lu, Jinbo
    Pei, Zhen
    Chen, Jinling
    Tan, Kunyu
    Ran, Qi
    Wang, Hongyan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (6)
  • [35] Multi-Level Adaptive Attention Fusion Network for Infrared and Visible Image Fusion
    Hu, Ziming
    Kong, Quan
    Liao, Qing
    IEEE SIGNAL PROCESSING LETTERS, 2025, 32 : 366 - 370
  • [36] S2AC: Self-Supervised Attention Correlation Alignment Based on Mahalanobis Distance for Image Recognition
    Wang, Zhi-Yong
    Kang, Dae-Ki
    Zhang, Cui-Ping
    ELECTRONICS, 2023, 12 (21)
  • [37] Hyperspectral and Multispectral Image Fusion by Deep Neural Network in a Self-Supervised Manner
    Gao, Jianhao
    Li, Jie
    Jiang, Menghui
    REMOTE SENSING, 2021, 13 (16)
  • [38] Co-attention Based Feature Fusion Network for Spam Review Detection on Douban
    Cai, Huanyu
    Yu, Ke
    Zhou, Yuhao
    Wu, Xiaofei
    NEURAL PROCESSING LETTERS, 2022, 54 (06) : 5251 - 5271
  • [40] Infrared and visible image fusion based on multi-scale dense attention connection network
    Chen Y.
    Zhang J.
    Wang Z.
    Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2022, 30 (18): : 2253 - 2266