Multiscale Dilated U-Net Based Multifocus Image Fusion Algorithm

Cited by: 2
Authors
Nie Fenghao [1 ]
Li Mengxia [1 ]
Zhou Mengxiang [1 ]
Dong Yuxue [1 ]
Li Zhiliang [1 ]
Li Long [1 ]
Affiliations
[1] Yangtze Univ, Coll Comp Sci, Jingzhou 434023, Hubei, Peoples R China
Keywords
image processing; multi-focus image; image fusion; multi-scale; dilated convolution; curvelet
DOI
10.3788/LOP232443
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Current multifocus fusion algorithms extract image features at only a single scale, which leads to problems such as loss of edge detail and local blurring in the fused image. To address these problems, this paper proposes a multifocus image fusion algorithm based on a multiscale dilated U-Net. First, in the encoder of the U-Net, a multiscale dilated module was introduced to replace the traditional convolution module; it exploits receptive fields of various sizes to capture local and global information more comprehensively. In addition, to further enhance image feature representation, an RFB-s module was employed in the middle layer of the U-Net to strengthen the localization of multiscale features. The proposed fusion algorithm adopted end-to-end supervised deep learning and was divided into three modules: feature extraction, feature fusion, and image reconstruction, where the feature extraction module used the U-Net containing the multiscale dilated modules. Experimental results show that the fused images obtained with the proposed algorithm have clear detail and texture and are free of overlapping artifacts. Among all multifocus image fusion algorithms used for comparison, the proposed algorithm is optimal in terms of the average gradient, visual information fidelity, and mutual information metrics, and achieves suboptimal results close to the optimum in the edge information retention metric. Ablation experiments further verify that the proposed multiscale dilated module remarkably enhances the feature extraction capability of the network, thereby improving fusion quality.
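The record gives no implementation details, but the multiscale dilated module described above, parallel dilated convolutions with different receptive fields whose outputs are concatenated and fused, can be illustrated roughly as follows. This is a minimal sketch in PyTorch, assuming dilation rates of 1, 2, and 4 and a 1x1 fusion convolution; the class name MultiscaleDilatedBlock and all parameter choices are hypothetical and not taken from the paper.

# Hypothetical sketch of a multiscale dilated convolution block (not the authors' code).
import torch
import torch.nn as nn

class MultiscaleDilatedBlock(nn.Module):
    """Parallel 3x3 dilated convolutions whose outputs are concatenated and
    merged by a 1x1 convolution, standing in for a plain convolution block
    in a U-Net encoder."""
    def __init__(self, in_channels, out_channels, dilations=(1, 2, 4)):
        super().__init__()
        branch_channels = out_channels // len(dilations)
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding = dilation keeps the spatial size of a 3x3 kernel unchanged
                nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution merges the concatenated multiscale features.
        self.fuse = nn.Conv2d(branch_channels * len(dilations), out_channels,
                              kernel_size=1)

    def forward(self, x):
        # Each branch sees the same input through a different receptive field size.
        features = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(features, dim=1))

if __name__ == "__main__":
    block = MultiscaleDilatedBlock(in_channels=64, out_channels=128)
    y = block(torch.randn(1, 64, 128, 128))
    print(y.shape)  # torch.Size([1, 128, 128, 128])

In the architecture outlined in the abstract, a block of this kind would replace each plain convolution block in the U-Net encoder; the decoder, the RFB-s middle layer, and the fusion and reconstruction modules are not shown here.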
Pages: 10