A Deep Model for Multi-Focus Image Fusion Based on Gradients and Connected Regions

Cited by: 50
Authors
Xu, Han [1 ]
Fan, Fan [1 ,2 ]
Zhang, Hao [1 ]
Le, Zhuliang [1 ]
Huang, Jun [1 ,2 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Hubei, Peoples R China
[2] Wuhan Univ, Inst Aerosp Sci & Technol, Wuhan 430072, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-focus image fusion; unsupervised learning; connected regions; TRANSFORM; SEGMENTATION;
DOI
10.1109/ACCESS.2020.2971137
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
In this paper, we propose a novel unsupervised model for multi-focus image fusion based on gradients and connected regions, termed GCF. To overcome the stumbling block of vanishing gradients when applying deep networks to multi-focus image fusion, we design Mask-Net, which can directly generate a binary mask. Thus, there is no need for hand-crafted feature extraction or fusion rules. Based on the fact that objects within the depth of field (DOF) have a sharper appearance, i.e., larger gradients, we use the gradient relation map obtained from the source images to narrow the solution domain and speed up convergence. A constraint on the number of connected regions is then conducive to finding a more accurate binary mask. With a consistency verification strategy, the final mask is obtained by refining the initial binary mask and is used to generate the fused result. Therefore, the proposed method is an unsupervised model that requires no ground-truth data. Both qualitative and quantitative experiments are conducted on the publicly available Lytro dataset. The results show that GCF outperforms the state of the art in both visual perception and objective metrics.
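The gradient cue underlying the abstract can be sketched without the paper's Mask-Net: compare locally averaged gradient magnitudes of the two source images to decide, per pixel, which one is in focus, then select pixels accordingly. The following is a minimal NumPy sketch of that idea only; the function names (`box_filter`, `focus_mask`, `fuse`) are illustrative and not from the paper, and it omits the learned network, connected-region constraint, and consistency verification that GCF adds on top.

```python
import numpy as np

def box_filter(x, r):
    """Mean filter over a (2r+1)x(2r+1) window, edge-padded (shape-preserving)."""
    pad = np.pad(x, r, mode="edge")
    out = np.zeros(x.shape, dtype=float)
    h, w = x.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def gradient_magnitude(img):
    """L1 gradient magnitude via forward differences (shape-preserving)."""
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    return gx + gy

def focus_mask(img_a, img_b, r=2):
    """Binary mask: 1 where img_a is locally sharper (larger gradients)."""
    ga = box_filter(gradient_magnitude(img_a), r)
    gb = box_filter(gradient_magnitude(img_b), r)
    return (ga >= gb).astype(float)

def fuse(img_a, img_b, mask):
    """Pixel-wise selection of the in-focus source."""
    return mask * img_a + (1.0 - mask) * img_b
```

On synthetic inputs (one image blurred on the left half, the other on the right), such a mask is mostly 1 over the half where the first image is sharp; the paper's connected-region constraint would then clean up the isolated misclassified pixels that this plain gradient comparison leaves behind.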
Pages: 26316-26327
Page count: 12
References
36 in total
[1]  
[Anonymous], 2016, CoRR abs/1602.02830
[2]  
[Anonymous], ICML
[3]   Infrared and visible image fusion based on target-enhanced multiscale transform decomposition [J].
Chen, Jun ;
Li, Xuejiao ;
Luo, Linbo ;
Mei, Xiaoguang ;
Ma, Jiayi .
INFORMATION SCIENCES, 2020, 508 :64-78
[4]   Image Segmentation-Based Multi-Focus Image Fusion Through Multi-Scale Convolutional Neural Network [J].
Du, Chaoben ;
Gao, Shesheng .
IEEE ACCESS, 2017, 5 :15750-15761
[5]   FuseGAN: Learning to Fuse Multi-Focus Image via Conditional Generative Adversarial Network [J].
Guo, Xiaopeng ;
Nie, Rencan ;
Cao, Jinde ;
Zhou, Dongming ;
Mei, Liye ;
He, Kangjian .
IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (08) :1982-1996
[6]   A non-reference image fusion metric based on mutual information of image features [J].
Haghighat, Mohammad Bagher Akbari ;
Aghagolzadeh, Ali ;
Seyedarabi, Hadi .
COMPUTERS & ELECTRICAL ENGINEERING, 2011, 37 (05) :744-756
[7]   Multi-focus image fusion for visual sensor networks in DCT domain [J].
Haghighat, Mohammad Bagher Akbari ;
Aghagolzadeh, Ali ;
Seyedarabi, Hadi .
COMPUTERS & ELECTRICAL ENGINEERING, 2011, 37 (05) :789-797
[8]  
Hu X, 2010, PROCEEDINGS OF 2010 INTERNATIONAL SYMPOSIUM ON CONSTRUCTION ECONOMY AND MANAGEMENT (ISCEM2010), P171
[9]   Densely Connected Convolutional Networks [J].
Huang, Gao ;
Liu, Zhuang ;
van der Maaten, Laurens ;
Weinberger, Kilian Q. .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :2261-2269
[10]   Multi-focus image fusion based on nonsubsampled contourlet transform and focused regions detection [J].
Li, Huafeng ;
Chai, Yi ;
Li, Zhaofei .
OPTIK, 2013, 124 (01) :40-51