Data and model bias in artificial intelligence for healthcare applications in New Zealand

Cited by: 6
Authors
Yogarajan, Vithya [1]
Dobbie, Gillian [1]
Leitch, Sharon [2]
Keegan, Te Taka [3]
Bensemann, Joshua [1]
Witbrock, Michael [1]
Asrani, Varsha [4]
Reith, David [5]
Affiliations
[1] Univ Auckland, Sch Comp Sci, Waipapa Taumata Rau, Auckland, New Zealand
[2] Univ Otago, Otago Med Sch, Gen Practice & Rural Hlth, Dunedin, New Zealand
[3] Univ Waikato, Sch Comp & Math Sci, Hamilton, New Zealand
[4] Univ Auckland, Fac Med & Hlth Sci, Surg & Translat Res STaR Ctr, Sch Med, Dept Surg, Auckland, New Zealand
[5] Univ Otago, Otago Med Sch, Dunedin, New Zealand
Source
FRONTIERS IN COMPUTER SCIENCE | 2022, Vol. 4
Keywords
Artificial Intelligence; bias; healthcare; New Zealand; Māori; equity; concept drift
DOI
10.3389/fcomp.2022.1070493
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Subject Classification Codes
081203; 0835
Abstract
Introduction: Developments in Artificial Intelligence (AI) are being widely adopted in healthcare. However, the introduction and use of AI may come with biases and disparities, raising concerns about healthcare access and outcomes for underrepresented Indigenous populations. In New Zealand, Māori experience significant health inequities compared to the non-Indigenous population. This research explores equity concepts and fairness measures for AI in New Zealand healthcare.
Methods: This research considers data and model bias in NZ-based electronic health records (EHRs). Two very distinct NZ datasets are used: one obtained from a single hospital and another from multiple GP practices, with both datasets collected by clinicians. To ensure research equality and the fair inclusion of Māori, we combine expertise in AI, the New Zealand clinical context, and te ao Māori. Mitigation of inequity needs to be addressed in data collection, model development, and model deployment. In this paper, we analyze data and algorithmic bias in data collection and in model development, training, and testing, using health data collected by experts. We use fairness measures such as disparate impact, equal opportunity, and equalized odds to analyze tabular data. Furthermore, token frequencies, statistical significance testing, and fairness measures for word embeddings, such as the WEAT and WEFE frameworks, are used to analyze bias in free-form medical text. The AI models' predictions are also explained using SHAP and LIME.
Results: This research analyzed fairness metrics for NZ EHRs while considering data and algorithmic bias. We show evidence of bias arising from choices made in algorithmic design. Furthermore, we observe unintentional bias introduced by the underlying pre-trained models used to represent text data. This research addresses several vital issues while highlighting the need and opportunity for future research.
Discussion: This research takes early steps toward developing a model of socially responsible and fair AI for New Zealand's population. We provide an overview of reproducible concepts that can be applied to any NZ population data. Furthermore, we discuss the gaps and future research avenues that will enable more focused development of fairness measures suited to the New Zealand population's needs and social structure. One of the primary focuses of this research was ensuring fair inclusion. As such, we combine expertise in AI, clinical knowledge, and the representation of Indigenous populations. This inclusion of experts will be vital moving forward, providing a stepping stone toward the integration of AI for better outcomes in healthcare.
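As a minimal illustration of the tabular fairness measures named in the Methods (disparate impact, equal opportunity, and equalized odds), the following Python sketch computes each from scratch for a binary classifier. This is not the authors' code: the synthetic arrays, the 0/1 group encoding, and the variable names are assumptions made purely for illustration.

# Minimal sketch (assumed example, not the paper's implementation):
# fairness measures for binary predictions on tabular data.
import numpy as np

def disparate_impact(y_pred, group):
    # Ratio of positive-prediction rates, unprivileged (0) over
    # privileged (1); 1.0 indicates parity, and values below 0.8
    # are a commonly used threshold of concern.
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

def tpr(y_true, y_pred):
    # True positive rate: share of actual positives predicted positive.
    return y_pred[y_true == 1].mean()

def fpr(y_true, y_pred):
    # False positive rate: share of actual negatives predicted positive.
    return y_pred[y_true == 0].mean()

def equal_opportunity_gap(y_true, y_pred, group):
    # Equal opportunity compares TPR across groups; 0 means it holds.
    g0, g1 = group == 0, group == 1
    return tpr(y_true[g0], y_pred[g0]) - tpr(y_true[g1], y_pred[g1])

def equalized_odds_gaps(y_true, y_pred, group):
    # Equalized odds requires both TPR and FPR to match across groups.
    g0, g1 = group == 0, group == 1
    tpr_gap = equal_opportunity_gap(y_true, y_pred, group)
    fpr_gap = fpr(y_true[g0], y_pred[g0]) - fpr(y_true[g1], y_pred[g1])
    return tpr_gap, fpr_gap

# Toy data standing in for held-out EHR predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)
group = rng.integers(0, 2, 200)
print(disparate_impact(y_pred, group))
print(equalized_odds_gaps(y_true, y_pred, group))

In practice these quantities would be computed on a model's held-out EHR predictions, with group membership defined by the ethnicity variable of interest rather than a random 0/1 split.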
Pages: 21