Detection of COVID-19 in smartphone-based breathing recordings: A pre-screening deep learning tool

Cited: 34
Authors
Alkhodari, Mohanad [1 ]
Khandoker, Ahsan H. [1 ]
Affiliations
[1] Khalifa Univ, Healthcare Engn Innovat Ctr HEIC, Dept Biomed Engn, Abu Dhabi, U Arab Emirates
Keywords
NEURAL-NETWORKS; DATA AUGMENTATION; LUNG SOUNDS; CT IMAGES; CLASSIFICATION; FRAMEWORK; DIAGNOSIS; DISEASES; MODEL; TERM;
DOI
10.1371/journal.pone.0262448
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09
Abstract
This study sought to investigate the feasibility of using smartphone-based breathing sounds within a deep learning framework to discriminate between COVID-19 subjects, including asymptomatic ones, and healthy subjects. A total of 480 breathing sounds (240 shallow and 240 deep) were obtained from a publicly available database named Coswara. These sounds were recorded by 120 COVID-19 and 120 healthy subjects via a smartphone microphone through a website application. A deep learning framework was proposed herein that relies on hand-crafted features extracted from the original recordings and from the mel-frequency cepstral coefficients (MFCC), as well as deep-activated features learned by a combination of a convolutional neural network and bi-directional long short-term memory units (CNN-BiLSTM). The statistical analysis of patient profiles showed a significant difference (p-value: 0.041) in ischemic heart disease between COVID-19 and healthy subjects. The analysis of the normal distribution of the combined MFCC values showed that COVID-19 subjects tended to have a distribution skewed more towards the right side of the zero mean (shallow: 0.59 +/- 1.74, deep: 0.65 +/- 4.35, p-value: <0.001). In addition, the proposed deep learning approach achieved an overall discrimination accuracy of 94.58% and 92.08% using shallow and deep recordings, respectively. Furthermore, it detected COVID-19 subjects successfully with a maximum sensitivity of 94.21%, specificity of 94.96%, and area under the receiver operating characteristic (AUROC) curve of 0.90. Among the 120 COVID-19 participants, asymptomatic subjects (18 subjects) were successfully detected with 100.00% accuracy using shallow recordings and 88.89% using deep recordings. This study paves the way towards utilizing smartphone-based breathing sounds for COVID-19 detection. The observations of this study suggest that deep learning applied to smartphone-based breathing sounds could serve as an effective pre-screening tool for COVID-19 alongside the current reverse-transcription polymerase chain reaction (RT-PCR) assay. It can be considered an early, rapid, easily distributed, time-efficient, and almost no-cost diagnostic technique that complies with social distancing restrictions during the COVID-19 pandemic.
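To make the described pipeline concrete, the sketch below illustrates, under stated assumptions, how MFCC features could be extracted from a smartphone breathing recording and fed to a CNN-BiLSTM binary classifier of the kind named in the abstract. This is a minimal Python illustration (librosa + Keras), not the authors' implementation; the sampling rate, MFCC count, frame length, layer sizes, and training settings are assumptions made for the example.

```python
# Minimal sketch (not the authors' exact pipeline): MFCC extraction from a
# breathing recording plus a CNN-BiLSTM classifier for COVID-19 vs. healthy.
# All hyperparameters below are illustrative assumptions.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def extract_mfcc(path, sr=22050, n_mfcc=13, max_frames=300):
    """Load a breathing recording and return a fixed-size (frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)
    # Pad or truncate along the time axis so every recording has the same shape.
    if mfcc.shape[0] < max_frames:
        mfcc = np.pad(mfcc, ((0, max_frames - mfcc.shape[0]), (0, 0)))
    return mfcc[:max_frames]

def build_cnn_bilstm(input_shape=(300, 13)):
    """CNN front end for local spectral patterns, BiLSTM for temporal context."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=2),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # probability of COVID-19
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage (hypothetical file list and labels):
# X = np.stack([extract_mfcc(f) for f in wav_files])   # (n_samples, 300, 13)
# model = build_cnn_bilstm()
# model.fit(X, labels, validation_split=0.2, epochs=30, batch_size=16)
```

The design choice mirrors the abstract's rationale: convolutional layers capture local spectral patterns in the MFCC matrix, while the bidirectional LSTM summarizes how those patterns evolve over the breathing cycle before the final sigmoid output.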
Pages: 25