A novel perceptual two layer image fusion using deep learning for imbalanced COVID-19 dataset

Cited by: 0
Authors
Elzeki O.M. [1]
Elfattah M.A. [2]
Salem H. [3]
Hassanien A.E. [4,5]
Shams M. [6]
Affiliations
[1] Faculty of Computers and Information Sciences, Mansoura University, Mansoura
[2] Misr Higher Institute for Commerce and Computers, Mansoura
[3] Communications and Computers Engineering Department, Faculty of Engineering, Delta University for Science and Technology, Gamasa
[4] Faculty of Computers and Artificial Intelligence, Cairo University, Cairo
[5] Scientific Research Group in Egypt (SRGE), Cairo
[6] Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh
Keywords
CNN; Coronavirus; COVID19; Deep learning; Feature analysis; Feature extraction; Image fusion; Machine learning; NSCT; VGG19;
DOI
10.7717/PEERJ-CS.364
Abstract
Background and Purpose: COVID-19 is caused by a new strain of coronavirus that has brought daily life to a standstill worldwide. The virus is spreading rapidly across the globe and poses a serious threat to public health. Medical tests and analyses have shown that lung infection occurs in almost all COVID-19 patients. Although chest Computed Tomography (CT) is a useful imaging method for diagnosing lung-related diseases, chest X-ray (CXR) is more widely available, mainly because of its lower cost and faster results. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze CXR images, but a large number of images is crucial to its performance. Materials and Methods: In this article, we propose a novel perceptual two-layer image fusion using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the performance of the proposed algorithm, the dataset used for this work includes 87 CXR images acquired from 25 cases, all confirmed with COVID-19. Dataset preprocessing is needed to facilitate the role of convolutional neural networks (CNN). Thus, a hybrid decomposition and fusion scheme combining the Nonsubsampled Contourlet Transform (NSCT) with CNN_VGG19 as a feature extractor was used. Results: Our experimental results show that the algorithm established here can reliably generate fused images for the imbalanced COVID-19 dataset. Compared with the original COVID-19 dataset, the fused images contain more features and characteristics. For performance evaluation, six metrics are applied, namely QAB/F, QMI, PSNR, SSIM, SF, and STD, to compare various medical image fusion (MIF) methods. On QMI, PSNR, and SSIM, the proposed NSCT + CNN_VGG19 algorithm achieves the highest scores, and its fused images contain the richest features and characteristics. We can deduce that the proposed fusion algorithm is efficient enough to generate CXR COVID-19 images that are more useful for the examiner to explore patient status. Conclusions: A novel image fusion algorithm using DL for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results show that the proposed NSCT + CNN_VGG19 algorithm outperforms competitive image fusion algorithms. © Copyright 2021 Elzeki et al.
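As an illustration of the pipeline described in the abstract, the following Python snippet sketches the two-layer idea of combining a multi-scale decomposition with VGG19 deep features to fuse two CXR images. It is a minimal sketch under assumptions and is not the authors' implementation: since no standard NSCT package is available in common Python distributions, a stationary wavelet transform (PyWavelets) stands in for NSCT, VGG19 features come from torchvision, and helper names such as `deep_activity_map` and `fuse_pair` are illustrative rather than taken from the paper.

```python
# Minimal sketch: multi-scale decomposition + VGG19-feature-guided fusion of
# two registered grayscale CXR images in [0, 1]. SWT (PyWavelets) is a stand-in
# for NSCT; ImageNet mean/std normalization is omitted for brevity.
import numpy as np
import pywt
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def deep_activity_map(img):
    """L1-norm of VGG19 conv1_1 feature maps, resized to the image size."""
    x = torch.from_numpy(img).float()[None, None].repeat(1, 3, 1, 1)  # gray -> 3 channels
    with torch.no_grad():
        feat = vgg19[:2](x)                       # conv1_1 + ReLU
    act = feat.abs().sum(dim=1, keepdim=True)     # per-pixel activity level
    act = F.interpolate(act, size=img.shape, mode='bilinear', align_corners=False)
    return act[0, 0].numpy()

def fuse_pair(img_a, img_b, wavelet='db2', level=2):
    """Fuse two images; their sides must be divisible by 2**level for SWT."""
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    wa, wb = deep_activity_map(img_a), deep_activity_map(img_b)
    mask = (wa >= wb).astype(float)               # choose-max rule on deep activity
    fused = []
    for (la, da), (lb, db) in zip(ca, cb):
        low = 0.5 * (la + lb)                     # average the approximation band
        high = tuple(mask * xa + (1 - mask) * xb for xa, xb in zip(da, db))
        fused.append((low, high))
    return np.clip(pywt.iswt2(fused, wavelet), 0, 1)
```

The choose-max rule on the deep-feature activity map is one common way to let CNN features steer which source contributes each detail coefficient; the paper's actual fusion rule over NSCT subbands may differ.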
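For the evaluation side, the snippet below sketches four of the six metrics named above (PSNR, SSIM, SF, STD) using scikit-image and NumPy; QAB/F and QMI require edge-strength and mutual-information machinery that is omitted here, and the helper names (`spatial_frequency`, `fusion_scores`) are illustrative rather than taken from the paper.

```python
# Hedged sketch of four fusion-quality metrics for images scaled to [0, 1].
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), the row/column first-difference energy."""
    rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def fusion_scores(fused, reference):
    """Score a fused image against one of its source images."""
    return {
        'PSNR': peak_signal_noise_ratio(reference, fused, data_range=1.0),
        'SSIM': structural_similarity(reference, fused, data_range=1.0),
        'SF':   spatial_frequency(fused),
        'STD':  float(np.std(fused)),
    }
```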
Pages: 1-35
Number of pages: 34