ZMFF: Zero-shot multi-focus image fusion

Cited by: 56
Authors
Hu, Xingyu [1 ]
Jiang, Junjun [1 ]
Liu, Xianming [1 ]
Ma, Jiayi [2 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
[2] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
Keywords
Multi-focus image fusion; Deep image prior; Deep convolutional neural network; Quality assessment; Algorithm; Network; MFF
DOI
10.1016/j.inffus.2022.11.014
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multi-focus image fusion (MFF) is an effective way to eliminate the out-of-focus blur generated in the imaging process. The difficulty of distinguishing different blur levels and the lack of real supervised data make multi-focus image fusion a challenging task even after decades of research. According to deep image prior (DIP) (Ulyanov et al., 2018), a neural network itself can capture the low-level statistics of a single image and has been successfully used as a prior for solving many inverse problems, without the need for hand-crafted priors or priors learned from large-scale datasets. Motivated by this idea, we propose a novel multi-focus image fusion framework named ZMFF, comprising a deep image prior network that models the deep prior of the fused image and a deep mask prior network that models the deep prior of the focus map corresponding to each source image. Without labor-intensive collection of training pairs, our method achieves zero-shot learning and avoids the domain-shift problem caused by the inconsistency between manually degraded multi-focus images and real ones. To the best of our knowledge, it is the first unsupervised and untrained deep model for the MFF task. Extensive experiments on both synthetic and real-world datasets demonstrate the promising performance, generalization and flexibility of our approach. Source code is available at https://github.com/junjun-jiang/ZMFF.
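The framework described in the abstract amounts to two networks jointly optimized on a single set of source images: one generates the fused image from fixed random noise, the other generates per-source focus maps. Below is a minimal PyTorch sketch of such a DIP-style zero-shot fusion loop; the tiny CNN architecture, the L1 reconstruction objective, the total-variation weight, and the function names (small_cnn, fuse) are illustrative assumptions, not the authors' released implementation (see the GitHub repository above for that).

# Minimal sketch of a DIP-style zero-shot multi-focus fusion loop (PyTorch).
# NOT the authors' released code; shapes, losses and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn(out_channels):
    # A tiny CNN standing in for the hourglass networks typically used with DIP.
    return nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_channels, 3, padding=1),
    )

def fuse(sources, iters=2000, lr=1e-3, tv_weight=1e-4):
    # sources: list of K tensors shaped (1, 3, H, W), values in [0, 1].
    k = len(sources)
    _, _, h, w = sources[0].shape
    imgs = torch.cat(sources, dim=0)                      # (K, 3, H, W)

    net_img = small_cnn(3)                                # deep image prior: fused image
    net_mask = small_cnn(k)                               # deep mask prior: K focus maps
    z_img = torch.randn(1, 32, h, w)                      # fixed random inputs
    z_mask = torch.randn(1, 32, h, w)

    opt = torch.optim.Adam(
        list(net_img.parameters()) + list(net_mask.parameters()), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        fused = torch.sigmoid(net_img(z_img))             # (1, 3, H, W)
        masks = F.softmax(net_mask(z_mask), dim=1)        # (1, K, H, W), sums to 1 per pixel
        # The mask-weighted composite of the sources should match the fused image.
        composite = (masks.transpose(0, 1) * imgs).sum(dim=0, keepdim=True)
        loss = F.l1_loss(fused, composite)
        # Total-variation term encourages piecewise-smooth focus maps.
        tv = (masks[..., 1:, :] - masks[..., :-1, :]).abs().mean() + \
             (masks[..., :, 1:] - masks[..., :, :-1]).abs().mean()
        (loss + tv_weight * tv).backward()
        opt.step()
    return fused.detach(), masks.detach()

In this sketch, fuse would be called with two or more registered source images of the same scene; the softmax over the K mask channels yields per-pixel weights that sum to one, mirroring the compositing constraint commonly used in multi-focus fusion.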
Pages: 127 - 138
Number of pages: 12
References
64 in total
  • [1] Ensemble of CNN for multi-focus image fusion
    Amin-Naji, Mostafa
    Aghagolzadeh, Ali
    Ezoji, Mehdi
    [J]. INFORMATION FUSION, 2019, 51 : 201 - 214
  • [2] Quadtree-based multi-focus image fusion using a weighted focus-measure
    Bai, Xiangzhi
    Zhang, Yu
    Zhou, Fugen
    Xue, Bindang
    [J]. INFORMATION FUSION, 2015, 22 : 105 - 118
  • [3] Burt P. J., 1985, Proceedings of the SPIE - The International Society for Optical Engineering, V575, P173, DOI 10.1117/12.966501
  • [4] Robust Multi-Focus Image Fusion Using Edge Model and Multi-Matting
    Chen, Yibo
    Guan, Jingwei
    Cham, Wai-Kuen
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (03) : 1526 - 1541
  • [5] A new automated quality assessment algorithm for image fusion
    Chen, Yin
    Blum, Rick S.
    [J]. IMAGE AND VISION COMPUTING, 2009, 27 (10) : 1421 - 1432
  • [6] Multi-Focus Image Fusion Based on Convolution Neural Network for Parkinson's Disease Image Classification
    Dai, Yin
    Song, Yumeng
    Liu, Weibin
    Bai, Wenhe
    Gao, Yifan
    Dong, Xinyang
    Lv, Wenbo
    [J]. DIAGNOSTICS, 2021, 11 (12)
  • [7] "Double-DIP" : Unsupervised Image Decomposition via Coupled Deep-Image-Priors
    Gandelsman, Yossi
    Shocher, Assaf
    Irani, Michal
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11018 - 11027
  • [8] Single fog image restoration with multi-focus image fusion
    Gao, Yin
    Su, Yijing
    Li, Qiming
    Li, Jun
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 55 : 586 - 595
  • [9] FuseGAN: Learning to Fuse Multi-Focus Image via Conditional Generative Adversarial Network
    Guo, Xiaopeng
    Nie, Rencan
    Cao, Jinde
    Zhou, Dongming
    Mei, Liye
    He, Kangjian
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (08) : 1982 - 1996
  • [10] Multi-focus image fusion for visual sensor networks in DCT domain
    Haghighat, Mohammad Bagher Akbari
    Aghagolzadeh, Ali
    Seyedarabi, Hadi
    [J]. COMPUTERS & ELECTRICAL ENGINEERING, 2011, 37 (05) : 789 - 797