MultiFusionNet: multilayer multimodal fusion of deep neural networks for chest X-ray image classification

Cited by: 0
Authors
Agarwal, Saurabh [1 ]
Arya, K.V. [1 ]
Meena, Yogesh Kumar [2 ]
Affiliations
[1] Multimedia and Information Security Research Group, ABV-IIITM, Gwalior
[2] Human-AI Interaction (HAIx) Lab, IIT Gandhinagar
Keywords
Chest X-ray image; Convolutional neural network (CNN); Disease classification; Medical image processing; Multilayer fusion model; Multimodal fusion model
DOI
10.1007/s00500-024-09901-x
Abstract
Chest X-ray imaging is a critical diagnostic tool for identifying pulmonary diseases. However, manual interpretation of these images is time-consuming and error-prone. Automated systems utilizing convolutional neural networks (CNNs) have shown promise in improving the accuracy and efficiency of chest X-ray image classification. While previous work has mainly focused on using feature maps from the final convolution layer, there is a need to explore the benefits of leveraging additional layers for improved disease classification. Extracting robust features from limited medical image datasets remains a critical challenge. In this paper, we propose a novel deep learning-based multilayer multimodal fusion model that emphasizes extracting features from different layers and fusing them. Our disease detection model considers the discriminatory information captured by each layer. Furthermore, we propose the fusion of different-sized feature maps (FDSFM) module to effectively merge feature maps from diverse layers. The proposed model achieves significantly higher accuracies of 97.21% and 99.60% for three-class and two-class classification, respectively. The proposed multilayer multimodal fusion model, along with the FDSFM module, holds promise for accurate disease classification and can also be extended to other disease classifications in chest X-ray images. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024.
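This record does not describe FDSFM's internals, but the core problem it names, merging feature maps of different spatial sizes drawn from several CNN layers, is commonly handled by pooling each map to a shared spatial resolution and concatenating along the channel axis. The sketch below illustrates that general idea only; `fuse_feature_maps`, the average-pooling strategy, and the layer shapes are illustrative assumptions, not the authors' actual module:

```python
import numpy as np

def fuse_feature_maps(maps, target_size):
    """Fuse CNN feature maps of different spatial sizes.

    Each map has shape (channels, H, W) with H and W divisible by
    target_size. Every map is average-pooled down to
    (channels, target_size, target_size), then all maps are
    concatenated along the channel axis.

    NOTE: illustrative sketch only -- not the paper's FDSFM module.
    """
    pooled_maps = []
    for fm in maps:
        c, h, w = fm.shape
        fh, fw = h // target_size, w // target_size  # pooling window
        # Group spatial positions into target_size x target_size blocks
        # and average within each block.
        pooled = fm.reshape(c, target_size, fh, target_size, fw).mean(axis=(2, 4))
        pooled_maps.append(pooled)
    # Stack all layers' information into one multi-channel feature map.
    return np.concatenate(pooled_maps, axis=0)

# Example: hypothetical feature maps from three backbone stages.
layer_outputs = [
    np.random.rand(64, 56, 56),
    np.random.rand(128, 28, 28),
    np.random.rand(256, 14, 14),
]
fused = fuse_feature_maps(layer_outputs, target_size=14)
print(fused.shape)  # (448, 14, 14)
```

In a real model the fused tensor would then feed a classification head; frameworks such as PyTorch offer learnable alternatives (e.g. interpolation plus 1x1 convolutions) for the resizing step.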
Pages: 11535-11551
Page count: 16