Infrared and visible image fusion using quantum computing induced edge preserving filter

Cited by: 2
Authors
Parida, Priyadarsan [1 ]
Panda, Manoj Kumar [1 ]
Rout, Deepak Kumar [2 ]
Panda, Saroj Kumar [3 ]
Affiliations
[1] GIET Univ, Dept Elect & Commun Engn, Rayagada 765022, Odisha, India
[2] IIIT Bhubaneswar, Dept Elect & Telecommun Engn, Bhubaneswar 751003, Odisha, India
[3] Veer Surendra Sai Univ Technol, Dept Elect Engn, Sambalpur 768018, Odisha, India
Keywords
Image fusion; Edge detail; Quantum computing; Weight map; Infrared; Visible; MULTISCALE TRANSFORM; NETWORK; FRAMEWORK;
DOI
10.1016/j.imavis.2024.105344
CLC number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Information fusion of visible and thermal images provides a more comprehensive understanding of a scene than either source image alone. It applies to a wide range of applications such as navigation, surveillance, remote sensing, and military operations, where significant information is obtained from diverse modalities, which makes the task quite challenging. The difficulty of integrating the different data sources stems from the diverse modalities of the imaging sensors and the complementary information they capture. There is therefore a need for precise infrared (IR) and visible image fusion that retains the useful information from both sources. In this article, a unique image fusion methodology is presented that focuses on enhancing the prominent details of both images while preserving their textural information with reduced noise. To this end, we put forward a quantum computing-induced IR and visible image fusion technique that efficiently preserves the required information and highlighted details from the source images. First, the proposed edge-detail-preserving strategy accurately retains the salient details of the source images. Next, the proposed quantum computing-induced weight map generation mechanism preserves the complementary details with fewer redundant details, producing the quantum details. In addition, the prominent features of the source images are retained using highly rich information. Finally, the quantum details and the prominent details are combined to produce the fused image for each source image pair. Both subjective and objective analyses are used to validate the effectiveness of the proposed algorithm. The efficacy of the developed model is validated by comparing its results against twenty-six existing fusion algorithms. Across various experiments, the developed framework achieves higher accuracy in both visual demonstration and quantitative assessment than different deep learning and non-deep-learning state-of-the-art (SOTA) techniques.
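The abstract outlines a weight-map-driven fusion pipeline (edge-detail-preserving decomposition, weight map generation, and recombination of detail and prominent features) without implementation detail. The following is a minimal sketch of such a pipeline in Python/NumPy, assuming a Gaussian base/detail split and Laplacian-saliency weights as stand-ins; it is not the paper's quantum computing-induced filter or weight map, which the abstract does not specify.

```python
# Minimal, hypothetical sketch of weight-map-based IR/visible fusion.
# The Gaussian blur and Laplacian saliency below are illustrative
# substitutes for the paper's edge-detail-preserving filter and
# quantum computing-induced weight maps (not the authors' method).
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def fuse_ir_visible(ir: np.ndarray, vis: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Fuse registered single-channel IR and visible images scaled to [0, 1]."""
    # Base/detail decomposition: blur gives the base layer, the residual is detail.
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis

    # Saliency-driven weight maps, normalized so the two weights sum to one.
    sal_ir = np.abs(laplace(ir)) + 1e-6
    sal_vis = np.abs(laplace(vis)) + 1e-6
    w_ir = sal_ir / (sal_ir + sal_vis)
    w_vis = 1.0 - w_ir

    # Average the base layers and weight the detail layers before recombining.
    fused = 0.5 * (base_ir + base_vis) + w_ir * detail_ir + w_vis * detail_vis
    return np.clip(fused, 0.0, 1.0)
```

Any edge-preserving filter (e.g., a guided or bilateral filter) could replace the Gaussian blur in this sketch; the point is only to show how per-pixel weight maps steer complementary detail from each modality into the fused result.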
Pages: 16