A weight induced contrast map for infrared and visible image fusion

Cited by: 5
Authors
Panda, Manoj Kumar [1 ]
Parida, Priyadarsan [1 ]
Rout, Deepak Kumar [2 ]
Affiliations
[1] GIET Univ, Dept Elect & Commun Engn, Rayagada 765022, Odisha, India
[2] IIIT Bhubaneswar, Dept Elect & Telecommun Engn, Bhubaneswar 751003, Odisha, India
Keywords
Infrared image; Visible image; Image decomposition; Contrast detection map; Weight map; Framework; Network
DOI
10.1016/j.compeleceng.2024.109256
CLC Number
TP3 [Computing Technology; Computer Technology];
Discipline Code
0812;
Abstract
Fusion merges details from infrared (IR) and visible images to generate a unified composite image that offers richer and more valuable information than either individual image. Surveillance, navigation, remote sensing, and military applications require multiple imaging modalities, including visible and IR, to observe a scene. Because these sensors provide complementary data and improve situational understanding, it is essential to fuse their information into a single image. Fusing IR and visible images presents several challenges owing to differences in imaging modalities and data characteristics, and to the need for accurate, meaningful integration of information. In this context, a novel image fusion architecture is proposed that focuses on enhancing prominent targets, with the objective of integrating thermal information from infrared images into visible images while preserving textural details within the visible images. Initially, the images from the different sensors are divided into high- and low-frequency components using a guided filter and an average filter, respectively. A unique contrast detection mechanism is proposed that preserves the contrast information of the original images. The contrast details of the IR and visible images are then enhanced using local standard deviation filtering and local range filtering, respectively. A new weight map construction strategy is developed that effectively preserves the complementary data of both source images. These weights and the gradient details of the source images are used to retain the salient features of the images acquired from the various modalities. A decision-making approach applied to the high-frequency components of the original images retains their prominent features. Finally, the salient and prominent feature details are integrated to generate the fused image.
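The decomposition and contrast steps described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the abstract does not give filter radii, the guided-filter regularization, or the exact contrast-map definition, so the detail layer is taken here as the residual after self-guided smoothing, and local standard deviation stands in as the contrast measure. `box_mean`, `guided_filter`, `decompose`, and `local_std` are illustrative names.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image, edge-padded."""
    k = 2 * r + 1
    H, W = img.shape
    pad = np.pad(img, r, mode="edge").astype(np.float64)
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    return (c[k:k + H, k:k + W] - c[:H, k:k + W]
            - c[k:k + H, :W] + c[:H, :W]) / (k * k)

def guided_filter(I, p, r=4, eps=1e-2):
    """Edge-preserving smoothing of p guided by I (guided-filter formulation)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mI * mI
    cov_Ip = box_mean(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)        # local linear coefficients
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def decompose(img, r=4, eps=1e-2):
    """Split an image into low- and high-frequency layers.
    Assumption: low frequency = average (box) filter output;
    high frequency = residual after self-guided smoothing."""
    low = box_mean(img, r)
    high = img - guided_filter(img, img, r, eps)
    return low, high

def local_std(img, r=3):
    """Local standard deviation, a simple local-contrast measure."""
    m = box_mean(img, r)
    return np.sqrt(np.maximum(box_mean(img * img, r) - m * m, 0.0))
```

In a fusion pipeline of this kind, `local_std` (and an analogous local-range filter) would feed the contrast and weight maps, while the high-frequency layers go to the decision-making step.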
The developed technique is validated from both subjective and quantitative perspectives. Against deep-learning-based approaches, it achieves EN, MI, N^abf, and SD of 6.86815, 13.73269, 0.15390, and 78.16158, respectively. Against existing traditional fusion methods, it achieves EN, MI, N^abf, FMI_w, and Q^abf of 6.86815, 13.73269, 0.15390, 0.41634, and 0.47196, respectively. Overall, the proposed technique provides competitive accuracy compared with twenty-seven state-of-the-art techniques.
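Two of the metrics quoted above can be computed from the fused image alone; a minimal sketch, assuming an 8-bit grayscale fused image (MI, N^abf, FMI_w, and Q^abf additionally require the source images and are omitted here):

```python
import numpy as np

def entropy(img):
    """Shannon entropy (EN) of an 8-bit grayscale image, in bits.
    Higher EN suggests more information retained in the fused image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def std_dev(img):
    """Standard deviation (SD), a proxy for global contrast."""
    return float(np.std(img))
```

A flat image scores EN = 0 and SD = 0; an image using all 256 gray levels equally scores the maximum EN of 8 bits.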
Pages: 16