Exploring Bias and Prediction Metrics to Characterise the Fairness of Machine Learning for Equity-Centered Public Health Decision-Making: A Narrative Review

Times Cited: 0
Authors
Raza, Shaina [1 ]
Shaban-Nejad, Arash [2 ]
Dolatabadi, Elham [3 ]
Mamiya, Hiroshi [4 ]
Affiliations
[1] Vector Inst Artificial Intelligence, Toronto, ON M5G 0C6, Canada
[2] Univ Tennessee, Coll Med, Dept Pediat, Hlth Sci Ctr, Memphis, TN 38103 USA
[3] York Univ, Fac Hlth, Sch Hlth Policy, Toronto, ON M3J 1P3, Canada
[4] McGill Univ, Dept Epidemiol Biostat & Occupat Hlth, Montreal, PQ H3A 1G1, Canada
Source
IEEE ACCESS | 2024 / Vol. 12
Funding
Canadian Institutes of Health Research;
Keywords
Public healthcare; Data models; Reviews; Prediction algorithms; Measurement; Medical services; Diseases; Decision making; Systematics; Predictive models; Public health equity; evaluation; machine learning; fairness; SELECTION BIAS; BIG DATA; CARE; OPPORTUNITIES; RISKS;
DOI
10.1109/ACCESS.2024.3509353
Chinese Library Classification (CLC) Number
TP [automation technology; computer technology];
Discipline Classification Code
0812;
Abstract
The rapid advancement of Machine Learning (ML) presents novel opportunities to enhance public health research, surveillance, and decision-making. However, a comprehensive understanding of algorithmic bias, i.e., systematic errors in predicted population health outcomes arising from the application of ML to public health, is still lacking. The objective of this narrative review is to explore the types of bias generated by ML and the quantitative metrics used to assess these biases. We searched PubMed, MEDLINE, IEEE (Institute of Electrical and Electronics Engineers) Xplore, ACM (Association for Computing Machinery) Digital Library, ScienceDirect, and Springer Nature. We used keywords to identify studies describing types of bias and metrics to measure them in the domain of ML and public and population health, published in English between 2008 and 2023, inclusive. A total of 72 articles met the inclusion criteria. Our review identified the commonly described types of bias and the quantitative metrics used to assess them from an equity perspective. This review will help formalize an evaluation framework for ML in public health from an equity perspective.
Pages: 180815-180829
Page count: 15
Related Papers
115 in total