An autoencoder deep residual network model for multi focus image fusion

Cited: 1
Authors
Shihabudeen, H. [1 ]
Rajeesh, J. [2 ]
Affiliations
[1] APJ Abdul Kalam Technol Univ, Coll Engn Thalassery, Thalassery 670107, Kerala, India
[2] Coll Engn Kidangoor, Dept Elect, Kottayam 686583, Kerala, India
Keywords
Deep Learning; Deep CNN; Image fusion; Decoder; Multifocus; Depth of field;
DOI
10.1007/s11042-023-16991-6
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Image fusion technology consolidates data from multiple source images of the same target and performs highly effective data complementation; it is widely used in the transportation, medical, and surveillance fields. Because of the imaging instrument's depth-of-field limitations, it is very hard to capture all details of a scene, and important features can be missed. To solve this problem, this study presents a competent multi-focus image fusion technique based on deep learning. The algorithm extracts features from the source inputs and feeds these feature vectors into a convolutional neural network (CNN) to create feature maps, so that the focus map collects the data critical for image fusion. Focus maps produced by the encoder are combined using L2-norm and nuclear-norm methods, and the combined focus maps are then given to a deep CNN that transforms the source images into the focused image. The proposed nuclear-norm-based fusion model yields good evaluation metrics for Entropy, Mutual Information, normalized MI, Q(abf), and the Structural Similarity Index Measure, with values of 7.6855, 8.7312, 1.1168, 0.7579, and 0.8669, respectively. The L2-norm strategy also offers good computational and experimental efficiency compared with other approaches. According to the experimental analysis, the proposed method outperforms many existing systems on a variety of performance parameters.
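The nuclear-norm combination of focus maps described in the abstract can be illustrated with a minimal block-wise sketch. Everything here is an assumption for illustration only: the block size, the per-block max-selection rule, and the function names are not taken from the paper, which applies these norms to encoder feature maps inside a trained autoencoder network rather than to raw images.

```python
import numpy as np

def nuclear_norm(patch):
    # Nuclear norm = sum of singular values of the patch matrix;
    # sharper (more textured) patches tend to score higher.
    return np.linalg.svd(patch, compute_uv=False).sum()

def activity_map(feat, k=8):
    # Score each non-overlapping k x k block of a single-channel
    # feature map by its nuclear norm (an assumed focus measure).
    h, w = feat.shape
    act = np.zeros((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            act[i, j] = nuclear_norm(feat[i*k:(i+1)*k, j*k:(j+1)*k])
    return act

def fuse(img_a, img_b, feat_a, feat_b, k=8):
    # Block-wise selection: take each block from whichever source
    # scores higher under the nuclear-norm activity measure.
    act_a, act_b = activity_map(feat_a, k), activity_map(feat_b, k)
    out = img_a.copy()
    for i in range(act_a.shape[0]):
        for j in range(act_a.shape[1]):
            if act_b[i, j] > act_a[i, j]:
                out[i*k:(i+1)*k, j*k:(j+1)*k] = img_b[i*k:(i+1)*k, j*k:(j+1)*k]
    return out
```

In the paper's actual pipeline, the fused focus map is passed to a deep CNN decoder that reconstructs the all-in-focus image; the hard block-wise copy above stands in for that learned reconstruction step.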
Pages: 34773-34794
Page count: 22
References
55 records in total
[1] Aishwarya N., BennilaThangammal C., Praveena N. G. NSCT and focus measure optimization based multi-focus image fusion. Journal of Intelligent & Fuzzy Systems, 2021, 41(1): 903-915.
[2] Amin-Naji M. Journal of AI and Data Mining, 2018, 6: 233. DOI: 10.22044/JADM.2017.5169.1624.
[3] Amin-Naji M., Aghagolzadeh A., Ezoji M. Ensemble of CNN for multi-focus image fusion. Information Fusion, 2019, 51: 201-214.
[4] Aslantas V., Toprak A. N. A pixel based multi-focus image fusion method. Optics Communications, 2014, 332: 350-358.
[5] Aymaz S., Kose C., Aymaz S. Multi-focus image fusion for different datasets with super-resolution using gradient-based new fusion rule. Multimedia Tools and Applications, 2020, 79(19-20): 13311-13350.
[6] Bavirisetti D. P., Xiao G., Zhao J., Dhuli R., Liu G. Multi-scale guided image and video fusion: A fast and efficient approach. Circuits, Systems, and Signal Processing, 2019, 38(12): 5576-5605.
[7] Chen Y., Guan J., Cham W.-K. Robust multi-focus image fusion using edge model and multi-matting. IEEE Transactions on Image Processing, 2018, 27(3): 1526-1541.
[8] Cvejic N. International Journal of Signal Processing, 2006.
[9] De I., Chanda B. Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure. Information Fusion, 2013, 14(2): 136-146.
[10] Du C., Gao S., Liu Y., Gao B. Multi-focus image fusion using deep support value convolutional neural network. Optik, 2019, 176: 567-578.