Accuracy of automated machine learning in classifying retinal pathologies from ultra-widefield pseudocolour fundus images

Cited by: 34
Authors
Antaki, Fares [1 ,2 ]
Coussa, Razek Georges [3 ]
Kahwati, Ghofril [4 ]
Hammamji, Karim [1 ]
Sebag, Mikael [1 ]
Duval, Renaud [2 ]
Affiliations
[1] Ctr Hosp Univ Montreal CHUM, Dept Ophthalmol, Montreal, PQ, Canada
[2] Hop Maisonneuve Rosemt CUO HMR, Dept Ophthalmol, Montreal, PQ, Canada
[3] Univ Iowa Hosp & Clin, Dept Ophthalmol & Visual Sci, Iowa City, IA USA
[4] Ecole Technol Super ETS, Dept Elect Engn, Montreal, PQ, Canada
Keywords
retina; imaging; health-care professionals; artificial intelligence
DOI
10.1136/bjophthalmol-2021-319030
Chinese Library Classification
R77 [Ophthalmology]
Discipline code
100212
Abstract
Aims: Automated machine learning (AutoML) is a novel tool in artificial intelligence (AI). This study assessed the discriminative performance of AutoML in differentiating retinal vein occlusion (RVO), retinitis pigmentosa (RP) and retinal detachment (RD) from normal fundi using ultra-widefield (UWF) pseudocolour fundus images.
Methods: Two ophthalmologists without coding experience carried out AutoML model design using a publicly available image data set (2137 labelled images). The data set was reviewed for low-quality and mislabelled images and then uploaded to the Google Cloud AutoML Vision platform for training and testing. We designed multiple binary models to differentiate RVO, RP and RD from normal fundi and compared them with bespoke models obtained from the literature. We then devised a multiclass model to detect RVO, RP and RD. Saliency maps were generated to assess the interpretability of the model.
Results: The AutoML models demonstrated high diagnostic properties in the binary classification tasks, generally comparable to bespoke deep-learning models (area under the precision-recall curve (AUPRC) 0.921-1, sensitivity 84.91%-89.77%, specificity 78.72%-100%). The multiclass AutoML model had an AUPRC of 0.876, a sensitivity of 77.93% and a positive predictive value of 82.59%. The per-label sensitivity and specificity, respectively, were: normal fundi (91.49%, 86.75%), RVO (83.02%, 92.50%), RP (72.00%, 100%) and RD (79.55%, 96.80%).
Conclusion: AutoML models created by ophthalmologists without coding experience can detect RVO, RP and RD in UWF images with very good diagnostic accuracy. The performance was comparable to bespoke deep-learning models derived by AI experts for RVO and RP but not for RD.
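For readers who want to reproduce the reported performance measures from their own exported predictions, the sketch below shows how AUPRC, sensitivity, specificity and positive predictive value can be computed for a binary classifier with scikit-learn. It is an illustrative example only: the study itself was carried out entirely in the Google Cloud AutoML Vision graphical interface without code, and the variable names (y_true, y_score) and the 0.5 decision threshold are assumptions, not part of the original work.

```python
# Illustrative sketch (not the authors' workflow): computing the metrics reported
# in the abstract (AUPRC, sensitivity, specificity, PPV) from exported binary
# predictions using scikit-learn. The labels and scores below are hypothetical.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc, confusion_matrix

# Hypothetical ground-truth labels (1 = disease, 0 = normal fundus) and
# model-predicted probabilities for the disease class.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.75, 0.40, 0.30, 0.05, 0.88, 0.60])

# Area under the precision-recall curve (AUPRC).
precision, recall, _ = precision_recall_curve(y_true, y_score)
auprc = auc(recall, precision)

# Binarise at an assumed 0.5 threshold to derive sensitivity, specificity and PPV.
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall for the disease class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)           # positive predictive value (precision)

print(f"AUPRC={auprc:.3f}  sensitivity={sensitivity:.2%}  "
      f"specificity={specificity:.2%}  PPV={ppv:.2%}")
```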
Pages: 90-95
Number of pages: 6