StackNet-DenVIS: a multi-layer perceptron stacked ensembling approach for COVID-19 detection using X-ray images

Cited by: 21
Authors
Autee, Pratik [1 ]
Bagwe, Sagar [1 ]
Shah, Vimal [1 ,2 ]
Srivastava, Kriti [1 ]
Affiliations
[1] Dwarkadas J Sanghvi Coll Engn, Dept Comp Engn, Mumbai, Maharashtra, India
[2] A-602 Venkatesh Pooja,Balaji Complex,150 Feet Rd, Thana 401101, Maharashtra, India
Keywords
Covid-19; Stacked generalization; Transfer learning; Deep neural networks; Generative adversarial networks; Image segmentation; Lung segmentation
DOI
10.1007/s13246-020-00952-6
Chinese Library Classification (CLC) number
R318 [Biomedical engineering]
Discipline classification code
0831
Abstract
The highly contagious nature of Coronavirus disease 2019 (Covid-19) resulted in a global pandemic. Because conventional testing for Covid-19 is relatively slow and taxing, a faster method is needed. Current research suggests that visible irregularities in the chest X-rays of Covid-19-positive patients are indicative of the disease. Hence, deep learning and image classification techniques can be employed to learn from these irregularities and classify images with high accuracy. This research presents a classifier model named StackNet-DenVIS, designed to act as a screening step before the existing swab tests are conducted. Using a novel approach that combines transfer learning and stacked generalization, the model aims to lower the false negative rate of classification, compensating for the 30% false negative rate of the swab tests. A dataset gathered from multiple reliable sources, consisting of 9953 chest X-rays (868 Covid and 9085 non-Covid), was used. This research also demonstrates handling of data imbalance using generative adversarial networks and sampling techniques. The accuracy, sensitivity, and specificity obtained with the proposed model were 95.07%, 99.40%, and 94.61%, respectively. To the best of our knowledge, the combination of accuracy and false negative rate obtained in this paper outperforms current implementations. The proposed architecture also accounts for other types of viral pneumonia. Given the unprecedented sensitivity of our model, we are optimistic that it will contribute to better Covid-19 detection.
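The abstract describes stacked generalization with a multi-layer perceptron meta-learner and reports sensitivity and specificity figures. Below is a minimal illustrative sketch of that general technique in Python using scikit-learn; the base estimators, synthetic data, and hyperparameters are placeholder assumptions and do not reproduce the paper's actual StackNet-DenVIS configuration or its CNN base learners.

    # Illustrative sketch of stacked generalization with an MLP meta-learner.
    # All estimators, data, and hyperparameters are placeholders, not the
    # paper's StackNet-DenVIS setup.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    # Stand-in for features extracted from chest X-rays; the real work uses
    # transfer-learned CNN backbones on an imbalanced Covid/non-Covid set.
    X, y = make_classification(n_samples=2000, n_features=64,
                               weights=[0.9, 0.1], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)

    # Level-0 learners produce out-of-fold class probabilities; a multi-layer
    # perceptron serves as the level-1 (meta) learner that combines them.
    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        final_estimator=MLPClassifier(hidden_layer_sizes=(32, 16),
                                      max_iter=500, random_state=0),
        stack_method="predict_proba",
        cv=5,
    )
    stack.fit(X_train, y_train)

    # Sensitivity, specificity, and false-negative rate, the metrics the
    # abstract reports for the proposed model.
    tn, fp, fn, tp = confusion_matrix(y_test, stack.predict(X_test)).ravel()
    sensitivity = tp / (tp + fn)      # recall on the positive class
    specificity = tn / (tn + fp)
    false_negative_rate = fn / (fn + tp)
    print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}  "
          f"FNR={false_negative_rate:.3f}")

The key design point mirrored here is stack_method="predict_proba": the meta-learner is trained on the base models' out-of-fold class probabilities rather than hard labels, which is how stacked generalization combines base-model outputs to drive down the false negative rate.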
Pages: 1399-1414
Number of pages: 16
相关论文
共 39 条
  • [1] Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks
    Apostolopoulos, Ioannis D.
    Mpesiana, Tzani A.
    [J]. PHYSICAL AND ENGINEERING SCIENCES IN MEDICINE, 2020, 43 (02) : 635 - 640
  • [2] Batista G., 2004, SIGKDD Exp., V6, P20, DOI DOI 10.1145/1007730.1007735
  • [3] Batista Gustavo EAPA, 2003, P WOB, P10
  • [4] A systematic study of the class imbalance problem in convolutional neural networks
    Buda, Mateusz
    Maki, Atsuto
    Mazurowski, Maciej A.
    [J]. NEURAL NETWORKS, 2018, 106 : 249 - 259
  • [5] Lung Segmentation in Chest Radiographs Using Anatomical Atlases With Nonrigid Registration
    Candemir, Sema
    Jaeger, Stefan
    Palaniappan, Kannappan
    Musco, Jonathan P.
    Singh, Rahul K.
    Xue, Zhiyun
    Karargyris, Alexandros
    Antani, Sameer
    Thoma, George
    McDonald, Clement J.
    [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2014, 33 (02) : 577 - 590
  • [6] Can AI Help in Screening Viral and COVID-19 Pneumonia?
    Chowdhury, Muhammad E. H.
    Rahman, Tawsifur
    Khandakar, Amith
    Mazhar, Rashid
    Kadir, Muhammad Abdul
    Bin Mahbub, Zaid
    Islam, Khandakar Reajul
    Khan, Muhammad Salman
    Iqbal, Atif
    Al Emadi, Nasser
    Reaz, Mamun Bin Ibne
    Islam, Mohammad Tariqul
    [J]. IEEE ACCESS, 2020, 8 : 132665 - 132676
  • [7] Cohen JP, 2020, ARXIV200611988CSEESS
  • [8] Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPRW.2009.5206848
  • [9] Francois C., 2017, Deep Learning with Python, DOI DOI 10.1186/S12859-020-03546-X
  • [10] Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique
    Greenspan, Hayit
    van Ginneken, Bram
    Summers, Ronald M.
    [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2016, 35 (05) : 1153 - 1159