SFINet: A semantic feature interactive learning network for full-time infrared and visible image fusion

Times Cited: 0
Authors
Song, Wenhao [1 ]
Li, Qilei [1 ,2 ]
Gao, Mingliang [1 ]
Chehri, Abdellah [3 ]
Jeon, Gwanggil [4 ]
Affiliations
[1] Shandong Univ Technol, Sch Elect & Elect Engn, Zibo 255000, Shandong, Peoples R China
[2] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
[3] Royal Mil Coll Canada, Dept Math & Comp Sci, Kingston, ON K7K 7B4, Canada
[4] Incheon Natl Univ, Dept Embedded Syst Engn, Incheon 22012, South Korea
Keywords
Image fusion; Deep learning; Semantic information; Attention mechanism; HYBRID MULTISCALE DECOMPOSITION; NEST;
DOI
10.1016/j.eswa.2024.125472
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Infrared and visible image fusion aims to combine information from different source images to generate a single high-quality image. However, many fusion methods prioritize visual quality over semantic information. To address this problem, we present a Semantic Feature Interactive Learning Network (SFINet) for full-time infrared and visible image fusion. SFINet couples an image fusion network with an image segmentation network through a Semantic Feature Interaction (SFI) module. The image fusion network employs Multi-scale Feature Extraction (MFE) modules to capture global and local information at multiple scales and adaptively fuses complementary information with a Dual Attention Feature Fusion (DAFF) module. The image segmentation network, in turn, guides the fusion network through the SFI module via semantic feature interaction. Comparative results show that the proposed method outperforms state-of-the-art (SOTA) models in both image fusion and semantic segmentation tasks. The code is available at https://github.com/songwenhao123/SFINet.
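Read as an architecture description, the abstract suggests a pipeline of multi-scale encoders for each modality, an attention-based fusion step, and semantic guidance from a segmentation branch. The following is a minimal PyTorch sketch of that fusion pipeline under stated assumptions: single-channel inputs, illustrative internals for the MFE and DAFF modules, and the segmentation branch and SFI interaction omitted. Module names are taken from the abstract, but their structure here is hypothetical; the authors' actual implementation is at the GitHub link above.

```python
# Minimal sketch of the fusion branch described in the abstract.
# Channel widths, attention form, and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn


class MFE(nn.Module):
    """Multi-scale Feature Extraction: parallel branches with different receptive fields."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.local = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)                  # local detail
        self.context = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)    # wider context
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.local(x), self.context(x)], dim=1))


class DAFF(nn.Module):
    """Dual Attention Feature Fusion: channel and spatial attention re-weight the concatenated modalities."""
    def __init__(self, ch):
        super().__init__()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(2 * ch, 2 * ch, kernel_size=1), nn.Sigmoid())
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2 * ch, 1, kernel_size=7, padding=3), nn.Sigmoid())
        self.proj = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, f_ir, f_vis):
        f = torch.cat([f_ir, f_vis], dim=1)
        f = f * self.channel_att(f) * self.spatial_att(f)  # adaptive fusion of complementary features
        return self.proj(f)


class SFINetSketch(nn.Module):
    """Fusion branch only; in the paper a segmentation branch would inject semantic features via SFI."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc_ir = MFE(1, ch)    # assumes single-channel infrared input
        self.enc_vis = MFE(1, ch)   # assumes grayscale visible input
        self.fusion = DAFF(ch)
        self.decoder = nn.Conv2d(ch, 1, kernel_size=3, padding=1)

    def forward(self, ir, vis):
        fused = self.fusion(self.enc_ir(ir), self.enc_vis(vis))
        return torch.sigmoid(self.decoder(fused))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 256, 256)   # infrared image
    vis = torch.rand(1, 1, 256, 256)  # visible image
    print(SFINetSketch()(ir, vis).shape)  # -> torch.Size([1, 1, 256, 256])
```

The sketch only illustrates how the named modules could compose; the semantic guidance that distinguishes SFINet (the SFI interaction with the segmentation network during training) is not reproducible from the abstract alone.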
Pages: 13
Related Papers
50 records in total
  • [31] MSFNet: MultiStage Fusion Network for infrared and visible image fusion
    Wang, Chenwu
    Wu, Junsheng
    Zhu, Zhixiang
    Chen, Hao
    NEUROCOMPUTING, 2022, 507 : 26 - 39
  • [32] A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation
    Hou, Jilei
    Zhang, Dazhi
    Wu, Wei
    Ma, Jiayi
    Zhou, Huabing
    ENTROPY, 2021, 23 (03)
  • [33] Visual fidelity and full-scale interaction driven network for infrared and visible image fusion
    Mei, Liye
    Hu, Xinglong
    Ye, Zhaoyi
    Ye, Zhiwei
    Xu, Chuan
    Liu, Sheng
    Lei, Cheng
    PATTERN RECOGNITION, 2025, 165
  • [34] SimpliFusion: a simplified infrared and visible image fusion network
    Liu, Yong
    Li, Xingyuan
    Liu, Yong
    Zhong, Wei
    VISUAL COMPUTER, 2025, 41 (02) : 1335 - 1350
  • [35] A Dual Cross Attention Transformer Network for Infrared and Visible Image Fusion
    Zhou, Zhuozhi
    Lan, Jinhui
    2024 7TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND BIG DATA, ICAIBD 2024, 2024, : 494 - 499
  • [36] Infrared and Visible Image Fusion via Decoupling Network
    Wang, Xue
    Guan, Zheng
    Yu, Shishuang
    Cao, Jinde
    Li, Ya
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [37] Infrared and visible image fusion using a feature attention guided perceptual generative adversarial network
    Chen Y.
    Zheng W.
    Shin H.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (07) : 9099 - 9112
  • [38] Semantic-Aware Infrared and Visible Image Fusion
    Zhou, Wenhao
    Wu, Wei
    Zhou, Huabing
    2021 4TH INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION ENGINEERING (RCAE 2021), 2021, : 82 - 85
  • [39] Infrared and Visible Image Fusion Based on Semantic Segmentation
    Zhou H.
    Hou J.
    Wu W.
    Zhang Y.
    Wu Y.
    Ma J.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (02): : 436 - 443
  • [40] Semantic perceptive infrared and visible image fusion Transformer
    Yang, Xin
    Huo, Hongtao
    Li, Chang
    Liu, Xiaowen
    Wang, Wenxi
    Wang, Cheng
    PATTERN RECOGNITION, 2024, 149