Multi-focus image fusion with convolutional neural network based on Dempster-Shafer theory

Cited by: 5
Authors
Li L. [1 ]
Li C. [1 ]
Lu X. [1 ]
Wang H. [1 ]
Zhou D. [1 ]
Affiliations
[1] National Key Laboratory of Aerospace Flight Dynamics, School of Astronautics, Northwestern Polytechnical University, Xi'an
Source
Optik | 2023, Vol. 272
Funding
National Natural Science Foundation of China
Keywords
Convolutional neural network; Dempster-Shafer theory; Multi-focus image fusion;
DOI
10.1016/j.ijleo.2022.170223
Abstract
Convolutional neural networks (CNN) have been applied to many fields, including image classification. Multi-focus image fusion can be regarded as the classification of focused and unfocused areas, so CNNs have been widely used for multi-focus image fusion. However, most methods use only the information from the last convolutional layer to complete the fusion task, which leads to suboptimal fusion results. To address this problem, we propose a novel convolutional neural network based on Dempster-Shafer theory (DST) for multi-focus image fusion. First, DST, a theoretical framework for reasoning under uncertainty, is introduced to fuse the results from different branch layers, thereby increasing the reliability of the results. In addition, a gradient residual block is designed to boost the network's utilization of edge information while reducing the dimensionality of the feature maps in the branch layers, which improves network performance and reduces the number of training parameters. Compared with other state-of-the-art fusion methods, the decision map of the proposed method is more precise. Objectively, the proposed method achieves the best average scores on 20 images from the "Lytro" and "Nature" datasets in terms of information entropy, mutual information, structural similarity, and a visual perception metric. © 2022 Elsevier GmbH
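To illustrate how evidence from different branch layers can be fused with Dempster-Shafer theory, the following is a minimal Python sketch of Dempster's rule of combination over a per-pixel frame of discernment {focused, unfocused}. The function name, the branch outputs, and the way mass is assigned to the full set Theta are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of Dempster's rule of combination for per-pixel focus
# evidence. The frame of discernment is {F (focused), U (unfocused)},
# plus Theta = {F, U} to carry explicit uncertainty.
# All names and mass values below are illustrative, not from the paper.

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs).

    Each BPA is a dict with keys 'F', 'U', 'Theta' whose values sum to 1.
    """
    # Conflict mass K: products of masses whose focal sets do not intersect.
    k = m1['F'] * m2['U'] + m1['U'] * m2['F']
    if abs(1.0 - k) < 1e-12:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - k
    return {
        # {F} results from {F}∩{F}, {F}∩Theta, Theta∩{F}
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['Theta'] + m1['Theta'] * m2['F']) / norm,
        # {U} results from {U}∩{U}, {U}∩Theta, Theta∩{U}
        'U': (m1['U'] * m2['U'] + m1['U'] * m2['Theta'] + m1['Theta'] * m2['U']) / norm,
        # Theta survives only as Theta∩Theta
        'Theta': (m1['Theta'] * m2['Theta']) / norm,
    }

# Hypothetical per-pixel evidence from two branch layers: part of each
# branch's softmax mass is moved to Theta to encode its uncertainty.
branch_shallow = {'F': 0.6, 'U': 0.2, 'Theta': 0.2}
branch_deep = {'F': 0.7, 'U': 0.1, 'Theta': 0.2}

fused = dempster_combine(branch_shallow, branch_deep)
print(fused)  # ≈ {'F': 0.85, 'U': 0.10, 'Theta': 0.05}; agreement strengthens 'F'
```

Thresholding the fused mass on 'F' at each pixel would then yield a binary decision map, with the mass left on Theta indicating pixels where the two branches remain uncertain.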