Addressing Artificial Intelligence Bias in Retinal Diagnostics

Cited by: 86
Authors
Burlina, Philippe [1 ,2 ,3 ,4 ]
Joshi, Neil [1 ]
Paul, William [1 ]
Pacheco, Katia D. [5 ]
Bressler, Neil M. [2 ]
Affiliations
[1] Johns Hopkins Univ, Appl Phys Lab, Baltimore, MD 21218 USA
[2] Johns Hopkins Univ, Sch Med, Wilmer Eye Inst, Retina Div, 3400 North Charles St, Baltimore, MD 21218 USA
[3] Johns Hopkins Univ, Malone Ctr Engn Healthcare, Baltimore, MD USA
[4] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21218 USA
[5] Eye Hosp, Brazilian Ctr Vis CBV, Dept Ophthalmol, Retina Div, Brasilia, DF, Brazil
Keywords
artificial intelligence; deep learning; automated diagnostics; diabetic retinopathy; melanocytes; algorithm; humans
DOI
10.1167/tvst.10.2.13
Chinese Library Classification
R77 [Ophthalmology]
Discipline Code
100212
Abstract
Purpose: This study evaluated generative methods to potentially mitigate artificial intelligence (AI) bias when diagnosing diabetic retinopathy (DR) resulting from training data imbalance or domain generalization, which occurs when deep learning systems (DLSs) face concepts at test/inference time they were not initially trained on.

Methods: The public domain Kaggle EyePACS dataset (88,692 fundi and 44,346 individuals, originally diverse for ethnicity) was modified by adding clinician-annotated labels and by constructing an artificial scenario of data imbalance and domain generalization: training (but not testing) exemplars were disallowed for images of retinas with DR warranting referral (DR-referable) from darker-skin individuals, who presumably have a greater concentration of melanin within uveal melanocytes, on average, contributing to retinal image pigmentation. A traditional/baseline diagnostic DLS was compared against new DLSs that used training data augmented via generative models for debiasing.

Results: Accuracy (95% confidence intervals [CIs]) of the baseline diagnostic DLS was 73.0% (66.9% to 79.2%) for fundus images of lighter-skin individuals versus 60.5% (53.5% to 67.3%) for darker-skin individuals, demonstrating bias/disparity (delta = 12.5%; Welch t-test t = 2.670, P = 0.008) in AI performance across protected subpopulations. Using novel generative methods to address the missing subpopulation training data (DR-referable darker-skin) instead achieved an accuracy of 72.0% (65.8% to 78.2%) for lighter-skin and 71.5% (65.2% to 77.8%) for darker-skin individuals, demonstrating closer parity (delta = 0.5%) in accuracy across subpopulations (Welch t-test t = 0.111, P = 0.912).

Conclusions: Findings illustrate how data imbalance and domain generalization can lead to disparity of accuracy across subpopulations, and show that novel generative methods of synthetic fundus images may play a role in debiasing AI.

Translational Relevance: New AI methods have possible applications to address potential AI bias in DR diagnostics arising from fundus pigmentation, and potentially in other ophthalmic DLSs as well.
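The subgroup comparison reported in the Results (accuracy with a 95% CI per skin-tone subgroup, plus a Welch t-test on the disparity) can be sketched as below. This is a minimal illustration, not the authors' evaluation code; the per-image correctness arrays are hypothetical stand-ins for the DLS's hit/miss outcomes on each test subset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-image 0/1 correctness indicators for each test subgroup
# (stand-ins; the paper reports ~73.0% vs ~60.5% accuracy for the baseline DLS).
correct_lighter = rng.binomial(1, 0.73, size=200)
correct_darker = rng.binomial(1, 0.605, size=200)

def accuracy_with_ci(correct, z=1.96):
    """Accuracy and a 95% normal-approximation CI for a vector of 0/1 outcomes."""
    p = correct.mean()
    half = z * np.sqrt(p * (1.0 - p) / len(correct))
    return p, (p - half, p + half)

for name, arr in (("lighter-skin", correct_lighter), ("darker-skin", correct_darker)):
    acc, (lo, hi) = accuracy_with_ci(arr)
    print(f"{name}: {acc:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# Welch's t-test (unequal variances) on the per-image correctness indicators,
# testing whether accuracy differs between the two subgroups.
t, p = stats.ttest_ind(correct_lighter, correct_darker, equal_var=False)
print(f"Welch t = {t:.3f}, P = {p:.3f}")
```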
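The debiasing idea described in the Methods, augmenting the training set with synthetic exemplars of the subgroup/label combination missing from training before fitting the diagnostic classifier, might look roughly like the sketch below. This is a heavily simplified illustration under stated assumptions: the toy generator, placeholder tensors, and ResNet-18 backbone are stand-ins, not the authors' StyleGAN-based pipeline or the EyePACS data.

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset
from torchvision.models import resnet18

# Stand-ins for illustration only: a toy "generator" and a toy real training set.
# In the paper, the generator is a deep generative model trained to synthesize
# DR-referable fundus images of the underrepresented subgroup.
latent_dim = 64
toy_generator = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Tanh(),
                              nn.Unflatten(1, (3, 64, 64)))
real_images = torch.randn(100, 3, 64, 64)      # placeholder real fundus images
real_labels = torch.randint(0, 2, (100,))      # placeholder DR-referable labels
real_train_ds = TensorDataset(real_images, real_labels)

def make_synthetic_dataset(generator, n_images, label):
    """Sample synthetic images and assign them the label of the missing subpopulation."""
    with torch.no_grad():
        images = generator(torch.randn(n_images, latent_dim))
    return TensorDataset(images, torch.full((n_images,), label, dtype=torch.long))

# Augment training data with synthetic DR-referable exemplars (label 1), then train as usual.
synthetic_ds = make_synthetic_dataset(toy_generator, n_images=200, label=1)
train_loader = DataLoader(ConcatDataset([real_train_ds, synthetic_ds]),
                          batch_size=16, shuffle=True)

model = resnet18(num_classes=2)                # binary: DR-referable vs. not
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for images, labels in train_loader:            # one illustrative pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```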
Pages: 1-13
Number of pages: 15