Multi-focus image fusion based on pulse coupled neural network and WSEML in DTCWT domain

Cited by: 0
Authors
Jia, Yuan [1 ]
Ma, Tiande [2 ]
Affiliations
[1] Renmin Univ China, Sch Stat, Beijing, Peoples R China
[2] Xinjiang Univ, Sch Comp Sci & Technol, Urumqi, Peoples R China
Source
FRONTIERS IN PHYSICS | 2025 / Vol. 13
Keywords
multi-focus image; image fusion; DTCWT; PCNN; WSEML; framework
D O I
10.3389/fphy.2025.1575606
CLC number
O4 [Physics]
Subject classification code
0702
Abstract
The goal of multi-focus image fusion is to merge near-focus and far-focus images of the same scene into a single all-focus image that accurately and comprehensively represents the focus information of the entire scene. Current multi-focus fusion algorithms can lose details and edges and introduce local blurring in the fused images. To address these problems, this paper proposes a novel multi-focus image fusion method based on a pulse coupled neural network (PCNN) and the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) in the dual-tree complex wavelet transform (DTCWT) domain. The source images are first decomposed by the DTCWT into low- and high-frequency components; an average-gradient (AG)-motivated PCNN-based fusion rule then processes the low-frequency components, while a WSEML-based fusion rule processes the high-frequency components. Simulation experiments on the public Lytro dataset demonstrate the superiority of the proposed algorithm.
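The high-frequency fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the common definition of the eight-neighborhood modified Laplacian (horizontal and vertical second differences plus diagonal differences weighted by 1/√2), a 3×3 Gaussian-like weighting window, and a choose-max rule on WSEML activity of the coefficient magnitudes; the paper's exact weights, window size, and selection rule may differ.

```python
import numpy as np
from scipy.ndimage import convolve

def eml(img):
    """Eight-neighborhood modified Laplacian (assumed standard form)."""
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="reflect")
    c = p[1:-1, 1:-1]
    # horizontal and vertical second differences
    h = np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
    v = np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])
    # diagonal second differences, down-weighted by 1/sqrt(2)
    d1 = np.abs(2 * c - p[:-2, :-2] - p[2:, 2:]) / np.sqrt(2)
    d2 = np.abs(2 * c - p[:-2, 2:] - p[2:, :-2]) / np.sqrt(2)
    return h + v + d1 + d2

def wseml(img, w=None):
    """Weighted sum of EML over a local window (3x3 weights assumed)."""
    if w is None:
        w = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    return convolve(eml(img), w, mode="reflect")

def fuse_high(coef_a, coef_b):
    """Choose-max fusion of two high-frequency subbands by WSEML activity."""
    mask = wseml(np.abs(coef_a)) >= wseml(np.abs(coef_b))
    return np.where(mask, coef_a, coef_b)
```

In a full pipeline, `fuse_high` would be applied per DTCWT high-frequency subband and orientation before the inverse transform reconstructs the fused image.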
Pages: 11
References
67 references in total
  • [1] Quadtree-based multi-focus image fusion using a weighted focus-measure
    Bai, Xiangzhi
    Zhang, Yu
    Zhou, Fugen
    Xue, Bindang
    [J]. INFORMATION FUSION, 2015, 22 : 105 - 118
  • [2] Basu S., 2025, SN COMPUT SCI, V6, P150, DOI 10.1007/s42979-025-03678-y
  • [3] CsdlFusion: An Infrared and Visible Image Fusion Method Based on LatLRR-NSST and Compensated Saliency Detection
    Chen, Hui
    Wu, Ziming
    Sun, Zihui
    Yang, Ning
    Menhas, Muhammad Ilyas
    Ahmad, Bilal
    [J]. JOURNAL OF THE INDIAN SOCIETY OF REMOTE SENSING, 2025, 53 (01) : 117 - 134
  • [4] Chen KR, 2025, arXiv:2501.15043, DOI 10.48550/arXiv.2501.15043
  • [5] RADFNet: An infrared and visible image fusion framework based on distributed network
    Feng, Siling
    Wu, Can
    Lin, Cong
    Huang, Mengxing
    [J]. FRONTIERS IN PLANT SCIENCE, 2023, 13
  • [6] BCMFIFuse: A Bilateral Cross-Modal Feature Interaction-Based Network for Infrared and Visible Image Fusion
    Gao, Xueyan
    Liu, Shiguang
    [J]. REMOTE SENSING, 2024, 16 (17)
  • [7] Multimodal fusion of different medical image modalities using optimised hybrid network
    Ghosh, Tanima
    Jayanthi, N.
    [J]. INTERNATIONAL JOURNAL OF AD HOC AND UBIQUITOUS COMPUTING, 2025, 48 (01) : 19 - 33
  • [8] A Wavelet Decomposition Method for Estimating Soybean Seed Composition with Hyperspectral Data
    Giri, Aviskar
    Sagan, Vasit
    Alifu, Haireti
    Maiwulanjiang, Abuduwanli
    Sarkar, Supria
    Roy, Bishal
    Fritschi, Felix B.
    [J]. REMOTE SENSING, 2024, 16 (23)
  • [9] ZMFF: Zero-shot multi-focus image fusion
    Hu, Xingyu
    Jiang, Junjun
    Liu, Xianming
    Ma, Jiayi
    [J]. INFORMATION FUSION, 2023, 92 : 127 - 138
  • [10] Multi-focus image fusion method based on adaptive weighting and interactive information modulation
    Jiang, Jinyuan
    Zhai, Hao
    Yang, You
    Xiao, Xuan
    Wang, Xinbo
    [J]. MULTIMEDIA SYSTEMS, 2024, 30 (05)