Uncertainty aware and explainable diagnosis of retinal disease

Cited by: 7
Authors
Singh, Amitojdeep [1 ,2 ]
Sengupta, Sourya [1 ,2 ]
Rasheed, Mohammed Abdul [1]
Jayakumar, Varadharajan [1]
Lakshminarayanan, Vasudevan [1,2]
Affiliations
[1] Univ Waterloo, Sch Optometry & Vis Sci, Theoret & Expt Epistemol Lab, Waterloo, ON, Canada
[2] Univ Waterloo, Dept Syst Design Engn, Waterloo, ON, Canada
Source
MEDICAL IMAGING 2021: IMAGING INFORMATICS FOR HEALTHCARE, RESEARCH, AND APPLICATIONS | 2021 / Vol. 11601
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Uncertainty; explainability; deep learning; retinal imaging; Bayesian; attributions; retina; retinal disease;
DOI
10.1117/12.2581362
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep learning methods for ophthalmic diagnosis have shown considerable success in tasks like segmentation and classification. However, their widespread application is limited because the models are opaque and vulnerable to making wrong decisions in complicated cases. Explainability methods show the features that a system used to make a prediction, while uncertainty awareness is the ability of a system to highlight when it is not sure about its decision. This is one of the first studies to use uncertainty and explanations together for informed clinical decision making. We perform an uncertainty analysis of a deep learning model for the diagnosis of four retinal diseases - age-related macular degeneration (AMD), central serous retinopathy (CSR), diabetic retinopathy (DR), and macular hole (MH) - using images from the publicly available OCTID dataset. Monte Carlo (MC) dropout is applied at test time to generate a distribution of parameters, and the resulting predictions approximate the predictive posterior of a Bayesian model. A threshold is computed from this distribution so that uncertain cases can be referred to an ophthalmologist, avoiding an erroneous diagnosis. The features learned by the model are visualized using a proven attribution method from a previous study. The effects of uncertainty on model performance and the relationship between uncertainty and explainability are discussed in terms of clinical significance. The uncertainty information, along with the heatmaps, makes the system more trustworthy for use in clinical settings.
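The test-time Monte Carlo dropout procedure described above can be illustrated with a minimal sketch (this is not the authors' code; it assumes a trained PyTorch classifier `model` containing dropout layers, a preprocessed OCT image tensor `x`, and a hypothetical `REFERRAL_THRESHOLD`, whereas the paper derives its referral threshold from the sampled distribution itself):

import torch
import torch.nn as nn
import torch.nn.functional as F

def enable_mc_dropout(model: nn.Module) -> None:
    # Keep dropout layers stochastic at inference while the rest of the
    # network (e.g., batch-norm statistics) stays in evaluation mode.
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    # Repeated stochastic forward passes; their average approximates the
    # predictive posterior of an equivalent Bayesian model.
    enable_mc_dropout(model)
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # shape (1, n_classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return mean_probs, entropy

# Hypothetical usage: refer high-uncertainty scans instead of returning a
# possibly erroneous automated diagnosis.
# mean_probs, entropy = mc_predict(model, x)
# if entropy.item() > REFERRAL_THRESHOLD:
#     print("Uncertain - refer to ophthalmologist")
# else:
#     print("Predicted class index:", mean_probs.argmax(dim=1).item())

Keeping only the dropout layers stochastic is the usual way to draw MC dropout samples without disturbing layers such as batch normalization, which should use their training-time statistics at test time.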
Pages: 10
References (25 in total)
[1] Alber M, 2019, J MACH LEARN RES, V20
[2] Arya V, 2019, arXiv, DOI: arXiv:1909.03012
[3] Barredo Arrieta A, Diaz-Rodriguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, Garcia S, Gil-Lopez S, Molina D, Benjamins R, Chatila R, Herrera F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. INFORMATION FUSION, 2020, 58: 82-115
[4] Chen HG, 2019, arXiv, DOI: arXiv:1911.11888
[5] De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, Askham H, Glorot X, O'Donoghue B, Visentin D, van den Driessche G, Lakshminarayanan B, Meyer C, Mackinder F, Bouton S, Ayoub K, Chopra R, King D, Karthikesalingam A, Hughes CO, Raine R, Hughes J, Sim DA, Egan C, Tufail A, Montgomery H, Hassabis D, Rees G, Back T, Khaw PT, Suleyman M, Cornebise J, Keane PA, Ronneberger O. Clinically applicable deep learning for diagnosis and referral in retinal disease. NATURE MEDICINE, 2018, 24(9): 1342+
[6] Gal Y, 2016, PR MACH LEARN RES, V48
[7] Gholami P, Roy P, Parthasarathy MK, Lakshminarayanan V. OCTID: Optical coherence tomography image database. COMPUTERS & ELECTRICAL ENGINEERING, 2020, 81
[8] Holzinger A, 2017, arXiv, DOI: arXiv:1712.09923
[9] Kaur H, Nori H, Jenkins S, Caruana R, Wallach H, Vaughan JW. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning. PROCEEDINGS OF THE 2020 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'20), 2020
[10] Kendall A. What uncertainties do we need in Bayesian deep learning for computer vision?