Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study

Cited by: 192
Authors
Faes, Livia [1 ,4 ]
Wagner, Siegfried K. [2 ,3 ,4 ]
Fu, Dun Jack [4 ]
Liu, Xiaoxuan [2 ,3 ,5 ,6 ]
Korot, Edward [4 ,7 ]
Ledsam, Joseph R. [8 ]
Back, Trevor [8 ]
Chopra, Reena [2 ,3 ,4 ,8 ]
Pontikos, Nikolas [2 ,3 ]
Kern, Christoph [4 ,9 ]
Moraes, Gabriella [4 ]
Schmid, Martin K. [1 ]
Sim, Dawn [2 ,3 ,4 ]
Balaskas, Konstantinos [2 ,3 ,4 ]
Bachmann, Lucas M. [10 ]
Denniston, Alastair K. [2 ,3 ,5 ,6 ,11 ]
Keane, Pearse A. [2 ,3 ,4 ]
Affiliations
[1] Cantonal Hosp Lucerne, Dept Ophthalmol, Luzern, Switzerland
[2] Moorfields Eye Hosp Natl Hlth Serv Fdn Trust, Biomed Res Ctr, Natl Inst Hlth Res, London, England
[3] UCL, Inst Ophthalmol, London EC1V 2PD, England
[4] Moorfields Eye Hosp Natl Hlth Serv Fdn Trust, Med Retina Dept, London, England
[5] Univ Hosp Birmingham Natl Hlth Serv Fdn Trust, Dept Ophthalmol, Birmingham, W Midlands, England
[6] Univ Birmingham, Inst Inflammat & Ageing, Acad Unit Ophthalmol, Birmingham, W Midlands, England
[7] Beaumont Eye Inst, Royal Oak, MI USA
[8] DeepMind, London, England
[9] Ludwig Maximilians Univ Munchen, Univ Hosp, Dept Ophthalmol, Munich, Germany
[10] Medignition, Zurich, Switzerland
[11] Univ Birmingham, Inst Appl Hlth Res, Ctr Patient Reported Outcome Res, Birmingham, W Midlands, England
Keywords
BIAS
DOI
10.1016/S2589-7500(19)30108-6
Chinese Library Classification
R-058
Abstract
Background Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of automated deep learning software for developing medical image diagnostic classifiers by health-care professionals with no coding and no deep learning expertise.

Methods We used five publicly available open-source datasets: retinal fundus images (MESSIDOR); optical coherence tomography (OCT) images (Guangzhou Medical University and Shiley Eye Institute, version 3); images of skin lesions (Human Against Machine [HAM] 10000); and both paediatric and adult chest x-ray (CXR) images (Guangzhou Medical University and Shiley Eye Institute, version 3, and the National Institutes of Health [NIH] dataset, respectively). Each dataset was fed separately into a neural architecture search framework, hosted through Google Cloud AutoML, that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models, and discriminative performance was assessed with the area under the precision-recall curve (AUPRC). For the deep learning model developed on a subset of the HAM10000 dataset, we did an external validation using the Edinburgh Dermofit Library dataset.

Findings Diagnostic properties and discriminative performance from internal validations were high in the binary classification tasks (sensitivity 73.3-97.0%; specificity 67-100%; AUPRC 0.87-1.00). In the multiple classification tasks, sensitivity ranged from 38% to 100% and specificity from 67% to 100%, and AUPRC ranged from 0.57 to 1.00 across the five automated deep learning models. In the external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0.47, with a sensitivity of 49% and a positive predictive value of 52%.

Interpretation All models, except the automated deep learning model trained on the multilabel classification task of the NIH CXR14 dataset, showed discriminative performance and diagnostic properties comparable to state-of-the-art deep learning algorithms, whereas performance in the external validation study was low. The quality of the open-access datasets (including insufficient information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, were the major limitations of this study. The availability of automated deep learning platforms provides an opportunity for the medical community to enhance its understanding of model development and evaluation. Although deriving classification models without a deep understanding of the underlying mathematical, statistical, and programming principles is attractive, performance comparable to expertly designed models is limited to more elementary classification tasks. Furthermore, care should be taken to adhere to ethical principles when using these automated models, to avoid discrimination and harm. Future studies should compare several application programming interfaces on thoroughly curated datasets. Copyright (C) 2019 The Author(s). Published by Elsevier Ltd.
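The evaluation metrics named in the Methods — sensitivity (recall), specificity, positive predictive value (precision), and AUPRC — can be illustrated with a minimal Python sketch. This is not code from the study (which used Google Cloud AutoML's built-in evaluation); it is a standalone, standard-library implementation of the metric definitions for a binary classifier, with AUPRC computed by step-wise summation over the ranked scores.

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and PPV from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0          # precision
    return sensitivity, specificity, ppv


def auprc(y_true, scores):
    """Area under the precision-recall curve (average-precision style sum)."""
    total_pos = sum(y_true)
    if total_pos == 0:
        return 0.0
    # Rank cases by descending classifier score, then sweep the threshold.
    ranked = sorted(zip(scores, y_true), reverse=True)
    tp = fp = 0
    area, prev_recall = 0.0, 0.0
    for _, label in ranked:
        if label == 1:
            tp += 1
        else:
            fp += 1
        recall = tp / total_pos
        precision = tp / (tp + fp)
        area += (recall - prev_recall) * precision
        prev_recall = recall
    return area
```

For example, `binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])` gives sensitivity 0.5, specificity 1.0, and PPV 1.0, and a perfectly ranked score list yields an AUPRC of 1.0 — the upper end of the range the study reports for its binary tasks.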
Pages: E232-E242 (11 pages)