Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database

Cited by: 85
Authors
Cha, Dongchul [1 ]
Pae, Chongwon [2 ,3 ,4 ]
Seong, Si-Baek [2 ,3 ]
Choi, Jae Young [1 ,3 ]
Park, Hae-Jeong [2 ,3 ,4 ]
Affiliations
[1] Yonsei Univ, Coll Med, Dept Otorhinolaryngol, 50-1 Yonsei Ro, Seoul 03722, South Korea
[2] Yonsei Univ, Ctr Syst & Translat Brain Sci, Inst Human Complex & Syst Sci, Seoul, South Korea
[3] Yonsei Univ, Coll Med, BK21 PLUS Project Med Sci, Seoul, South Korea
[4] Yonsei Univ, Coll Med, Dept Nucl Med, 50-1 Yonsei Ro, Seoul 03722, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Convolutional neural network; Deep learning; Otoendoscopy; Tympanic membrane; Ear disease; Ensemble learning; OTITIS; CLASSIFICATION;
DOI
10.1016/j.ebiom.2019.06.050
Chinese Library Classification
R5 [Internal Medicine];
Discipline Classification Codes
1002; 100201;
Abstract
Background: Ear and mastoid disease can easily be treated with early detection and appropriate medical care. However, a shortage of specialists and relatively low diagnostic accuracy call for a new diagnostic strategy, in which deep learning may play a significant role. The current study presents a machine learning model to automatically diagnose ear disease using a large database of otoendoscopic images acquired in the clinical environment.
Methods: A total of 10,544 otoendoscopic images were used to train nine public convolution-based deep neural networks to classify eardrum and external auditory canal features into six categories of ear diseases, covering most ear diseases (Normal, Attic retraction, Tympanic perforation, Otitis externa +/- myringitis, Tumor). After evaluating several optimization schemes, the two best-performing models were selected to compose an ensemble classifier by combining the classification scores of each classifier.
Findings: Based on accuracy and training time, transfer learning models built on Inception-V3 and ResNet101 were chosen, and the ensemble classifier using the two models yielded a significant improvement over each individual model, with an average accuracy of 93.67% under 5-fold cross-validation. Considering the substantial dependency of classifier performance on data size in transfer learning, evaluated in this study, the high accuracy of the current model is attributable to the large database.
Interpretation: The current study is unprecedented in terms of both disease diversity and diagnostic accuracy, which is comparable to or even better than that of an average otolaryngologist. The classifier was trained on data acquired under various conditions, making it suitable for practical environments. This study shows the usefulness of a deep learning model for the early detection and treatment of ear disease in clinical settings. (C) 2019 The Authors. Published by Elsevier B.V.
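The abstract states that the ensemble classifier was built "by combining classification scores of each classifier." A minimal sketch of this idea, assuming simple score averaging (the exact combination rule is not specified in the abstract) and using hypothetical softmax score vectors standing in for the outputs of the two fine-tuned models over the six disease categories:

```python
import numpy as np

# Hypothetical per-class softmax scores for one otoendoscopic image,
# standing in for the outputs of the two transfer-learned models
# (Inception-V3 and ResNet101 in the paper) over six disease classes.
scores_inception = np.array([0.05, 0.70, 0.10, 0.05, 0.05, 0.05])
scores_resnet    = np.array([0.10, 0.55, 0.20, 0.05, 0.05, 0.05])

def ensemble_predict(score_list):
    """Average the per-class scores across classifiers and return
    the index of the highest-scoring class plus the mean scores."""
    mean_scores = np.mean(score_list, axis=0)
    return int(np.argmax(mean_scores)), mean_scores

pred, mean_scores = ensemble_predict([scores_inception, scores_resnet])
print(pred)  # -> 1: the second class has the highest mean score (0.625)
```

Averaging (soft voting) lets a confident model compensate for an uncertain one on a per-class basis, which is consistent with the reported improvement of the ensemble over each model alone.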
Pages: 606-614 (9 pages)