MFSFFuse: Multi-receptive Field Feature Extraction for Infrared and Visible Image Fusion Using Self-supervised Learning

Cited by: 0
Authors
Gao, Xueyan [1 ]
Liu, Shiguang [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
Source
NEURAL INFORMATION PROCESSING, ICONIP 2023, PT VI | 2024, Vol. 14452
Keywords
Infrared and Visible Image; Image Fusion; Multi-receptive Field Feature Extraction; Self-supervised; NETWORK;
DOI
10.1007/978-981-99-8076-5_9
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Infrared and visible image fusion aims to combine complementary information from different modalities to improve image quality and resolution and to facilitate downstream visual tasks. Most current fusion methods suffer from incomplete or redundant feature extraction, resulting in indistinct targets or lost texture details. Moreover, infrared and visible image fusion lacks ground truth, so models trained with unsupervised objectives may also lose important features. To address these problems, we propose MFSFFuse, an infrared and visible image fusion method based on self-supervised learning. Specifically, we introduce a multi-receptive-field dilated convolution block that extracts multi-scale features via dilated convolutions, and we employ different attention modules to enhance information extraction in the respective branches. Furthermore, a dedicated loss function is devised to guide the optimization of the model toward an ideal fusion result. Extensive experiments show that our method achieves competitive results against state-of-the-art methods in both quantitative and qualitative evaluations.
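The following is a minimal, hypothetical PyTorch sketch of the multi-receptive-field dilated convolution idea described in the abstract, not the authors' implementation: parallel 3x3 convolutions with different dilation rates extract features at several receptive fields, and a 1x1 convolution fuses them. The class name, channel widths, and dilation rates (1, 2, 3) are illustrative assumptions; the attention modules and loss function mentioned above are not modeled here.

import torch
import torch.nn as nn


class MultiReceptiveFieldBlock(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 convolution (illustrative sketch)."""

    def __init__(self, in_channels: int, out_channels: int, dilations=(1, 2, 3)):
        super().__init__()
        # One branch per dilation rate; padding == dilation keeps the
        # spatial resolution unchanged for 3x3 kernels.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_channels * len(dilations), out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    block = MultiReceptiveFieldBlock(in_channels=1, out_channels=16)
    y = block(torch.randn(1, 1, 128, 128))  # e.g. a single-channel infrared patch
    print(y.shape)  # torch.Size([1, 16, 128, 128])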
Pages: 118-132
Number of pages: 15
Related Papers
50 records in total
  • [21] Visible and Infrared Image Fusion Using Deep Learning
    Zhang, Xingchen
    Demiris, Yiannis
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08) : 10535 - 10554
  • [22] An infrared and visible image fusion method based on deep convolutional feature extraction
    Pang, Z.-X.
    Liu, G.-H.
    Chen, C.-M.
    Liu, H.-T.
    Kongzhi yu Juece/Control and Decision, 2024, 39 (03) : 910 - 918
  • [23] IBFusion: An Infrared and Visible Image Fusion Method Based on Infrared Target Mask and Bimodal Feature Extraction Strategy
    Bai, Yang
    Gao, Meijing
    Li, Shiyu
    Wang, Ping
    Guan, Ning
    Yin, Haozheng
    Yan, Yonghao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 10610 - 10622
  • [24] Infrared and Visible Image Fusion Method by Using Hybrid Representation Learning
    He, Guiqing
    Ji, Jiaqi
    Dong, Dandan
    Wang, Jun
    Fan, Jianping
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2019, 16 (11) : 1796 - 1800
  • [25] MSCS: Multi-stage feature learning with channel-spatial attention mechanism for infrared and visible image fusion
    Huang, Zhenghua
    Xu, Biyun
    Xia, Menghan
    Li, Qian
    Zou, Lianying
    Li, Shaoyi
    Li, Xi
    INFRARED PHYSICS & TECHNOLOGY, 2024, 142
  • [26] Infrared and Visible Images Registration Using Feature and Area for Image Fusion
    Zhang, Xiuqiong
    Qin, Hongyin
    Wang, Mingrong
    Yang, Jian
    FOURTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2011): MACHINE VISION, IMAGE PROCESSING, AND PATTERN ANALYSIS, 2012, 8349
  • [27] DSAFuse: Infrared and visible image fusion via dual-branch spatial adaptive feature extraction
    Shen, Shixian
    Feng, Yong
    Liu, Nianbo
    Liu, Ming
    Li, Yingna
    NEUROCOMPUTING, 2025, 616
  • [28] DBIF: Dual-Branch Feature Extraction Network for Infrared and Visible Image Fusion
    Zhang, Haozhe
    Cui, Rongpu
    Zheng, Zhuohang
    Gao, Shaobing
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT VIII, 2025, 15038 : 309 - 323
  • [29] Using full-scale feature fusion for self-supervised indoor depth estimation
    Cheng, Deqiang
    Chen, Junhui
    Lv, Chen
    Han, Chenggong
    Jiang, He
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (09) : 28215 - 28233