Interactive Feature Embedding for Infrared and Visible Image Fusion

Cited by: 8
Authors
Zhao, Fan [1 ]
Zhao, Wenda [2 ,3 ]
Lu, Huchuan [2 ,3 ]
Affiliations
[1] Liaoning Normal Univ, Sch Phys & Elect Technol, Dalian 116029, Peoples R China
[2] Dalian Univ Technol, Key Lab Intelligent Control & Optimizat Ind Equipm, Minist Educ, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Image fusion; Task analysis; Image reconstruction; Fuses; Self-supervised learning; Data mining; Hierarchical representations; infrared and visible image fusion; interactive feature embedding; self-supervised learning; MULTI-FOCUS; SPARSE REPRESENTATION; SHEARLET TRANSFORM; DECOMPOSITION; ENHANCEMENT; INFORMATION; FRAMEWORK;
DOI
10.1109/TNNLS.2023.3264911
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
General deep learning-based methods for infrared and visible image fusion retain vital information through an unsupervised mechanism driven by elaborately designed loss functions. However, no hand-crafted loss function can guarantee that all vital information in the source images is sufficiently extracted. In this work, we propose a novel interactive feature embedding within a self-supervised learning framework for infrared and visible image fusion, aiming to overcome the degradation of vital information. The self-supervised learning framework efficiently extracts hierarchical representations of the source images. In particular, interactive feature embedding models are carefully designed to bridge self-supervised learning and infrared and visible image fusion learning, thereby retaining vital information. Qualitative and quantitative evaluations show that the proposed method performs favorably against state-of-the-art methods.
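To illustrate the fusion problem the abstract describes, the sketch below is a generic per-pixel weighted fusion baseline, not the paper's interactive feature embedding method. The gradient-magnitude activity measure and all function names are assumptions chosen purely for illustration: each output pixel is a convex combination of the infrared and visible inputs, weighted by local activity.

```python
import numpy as np

def saliency_weights(img: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-pixel activity measure: local gradient magnitude (illustrative choice)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(gx ** 2 + gy ** 2) + eps  # eps avoids division by zero later

def fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Convex per-pixel combination: the more 'active' source dominates each pixel."""
    w_ir, w_vis = saliency_weights(ir), saliency_weights(vis)
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis)

# Toy example: a bright "hot target" in the infrared image, a smooth ramp in the visible one.
ir = np.zeros((8, 8)); ir[3:5, 3:5] = 1.0
vis = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
fused = fuse(ir, vis)
print(fused.shape)  # (8, 8)
```

A learning-based method would replace the hand-crafted activity measure with learned hierarchical features; the abstract's point is precisely that hand-crafted criteria (weights or losses) cannot guarantee all vital information is retained.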
Pages: 12810-12822
Page count: 13
Related Papers
50 records in total (items [11]-[20] shown)
  • [11] Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis. Jian, Lihua; Rayhana, Rakiba; Ma, Ling; Wu, Shaowu; Liu, Zheng; Jiang, Huiqin. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 24: 3314-3326.
  • [12] An Information Retention and Feature Transmission Network for Infrared and Visible Image Fusion. Liu, Chang; Yang, Bin; Li, Yuehua; Zhang, Xiaozhi; Pang, Lihui. IEEE SENSORS JOURNAL, 2021, 21(13): 14950-14959.
  • [13] MetaFusion: Infrared and Visible Image Fusion via Meta-Feature Embedding from Object Detection. Zhao, Wenda; Xie, Shigeng; Zhao, Fan; He, You; Lu, Huchuan. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 13955-13965.
  • [14] Infrared and Visible Image Fusion: Methods, Datasets, Applications, and Prospects. Luo, Yongyu; Luo, Zhongqiang. APPLIED SCIENCES-BASEL, 2023, 13(19).
  • [15] MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion. Li, Hui; Wu, Xiao-Jun; Kittler, Josef. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29: 4733-4746.
  • [16] Infrared and Visible Image Fusion Based on Adversarial Feature Extraction and Stable Image Reconstruction. Su, Weijian; Huang, Yongdong; Li, Qiufu; Zuo, Fengyuan; Liu, Lijun. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71.
  • [17] Reflectance estimation for infrared and visible image fusion. Gu, Yan; Yang, Feng; Zhao, Weijun; Guo, Yiliang; Min, Chaobo. KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2021, 15(08): 2749-2763.
  • [18] Infrared and Visible Image Fusion Based on Gradient Transfer Optimization Model. Yu, Ruixing; Chen, Weiyu; Zhou, Daming. IEEE ACCESS, 2020, 8: 50091-50106.
  • [19] ITFuse: An interactive transformer for infrared and visible image fusion. Tang, Wei; He, Fazhi; Liu, Yu. PATTERN RECOGNITION, 2024, 156.
  • [20] Infrared and visible image fusion methods and applications: A survey. Ma, Jiayi; Ma, Yong; Li, Chang. INFORMATION FUSION, 2019, 45: 153-178.