Fairness in Low Birthweight Predictive Models: Implications of Excluding Race/Ethnicity

Authors
Brown, Clare C. [1 ]
Thomsen, Michael [1 ]
Amick, Benjamin C. [2 ]
Tilford, J. Mick [1 ]
Bryant-Moore, Keneshia [3 ]
Gomez-Acevedo, Horacio [4 ]
Affiliations
[1] Univ Arkansas Med Sci, Fay W Boozman Coll Publ Hlth, Dept Hlth Policy & Management, 4301 W Markham St Slot 820-12, Little Rock, AR 72205 USA
[2] Univ Arkansas Med Sci, Fay W Boozman Coll Publ Hlth, Dept Epidemiol, Little Rock, AR USA
[3] Univ Arkansas Med Sci, Fay W Boozman Coll Publ Hlth, Dept Hlth Behav & Hlth Educ, Little Rock, AR USA
[4] Univ Arkansas Med Sci, Coll Med, Dept Biomed Informat, Little Rock, AR USA
Funding
US National Institutes of Health
Keywords
Equity; Algorithmic fairness; Low birthweight; Preterm birth; Health; Bias; Race; Ethnicity; Algorithm; Language
DOI
10.1007/s40615-025-02296-x
Chinese Library Classification
R1 [Preventive medicine, hygiene]
Subject Classification Codes
1004; 120402
Abstract
Context: To evaluate algorithmic fairness in low birthweight predictive models.
Study Design: This study analyzed insurance claims (n = 9,990,990; 2013-2021) linked with birth certificates (n = 173,035; 2014-2021) from the Arkansas All Payers Claims Database (APCD).
Methods: Low birthweight (< 2500 g) predictive models included four approaches (logistic regression, elastic net, linear discriminant analysis, and gradient boosting machines [GBM]) with and without racial/ethnic information. Model performance was assessed overall, among Hispanic individuals, and among non-Hispanic White, Black, Native Hawaiian/Other Pacific Islander, and Asian individuals using multiple measures of predictive performance (i.e., AUC [area under the receiver operating characteristic curve] scores, calibration, sensitivity, and specificity).
Results: AUC scores were lower (i.e., the models underperformed) for Black and Asian individuals relative to White individuals. In the strongest-performing model (GBM), the AUC scores for the Black (0.718 [95% CI: 0.705-0.732]) and Asian (0.655 [95% CI: 0.582-0.728]) populations were lower than the AUC for White individuals (0.764 [95% CI: 0.754-0.775]). Model performance measured using AUC was comparable in models that included and excluded race/ethnicity; however, sensitivity (i.e., the percent of records correctly predicted as "low birthweight" among those who actually had low birthweight) was lower and calibration was weaker for Black individuals when race/ethnicity was excluded, suggesting underprediction.
Conclusions: This study found that racially blind models resulted in underprediction and reduced algorithmic performance, measured using sensitivity and calibration, for Black populations. Such underprediction could unfairly decrease resource allocation needed to reduce perinatal health inequities. Population health management programs should carefully consider algorithmic fairness in predictive models and associated resource allocation decisions.
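The subgroup evaluation described in Methods can be illustrated with a minimal sketch. This is not the authors' code: it uses fully synthetic data, scikit-learn's GradientBoostingClassifier as a stand-in for the GBM models, hypothetical predictors (`x1`, `x2`), a binary stand-in for race/ethnicity, and an assumed 0.5 classification threshold. It shows the general pattern of fitting race-aware and race-blind feature sets and comparing AUC, sensitivity, and specificity within each group.

```python
# Hedged illustration (synthetic data, hypothetical predictors): comparing
# subgroup AUC/sensitivity/specificity for race-aware vs. race-blind models.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
race = rng.integers(0, 2, n)            # 0/1 stand-in for two groups
x1 = rng.normal(size=n)                 # hypothetical clinical predictor
x2 = rng.normal(size=n)                 # hypothetical clinical predictor
logit = -2.0 + 1.2 * x1 + 0.6 * x2 + 0.5 * race
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = low birthweight

X_aware = np.column_stack([x1, x2, race])   # includes race/ethnicity
X_blind = np.column_stack([x1, x2])         # race-blind feature set

def subgroup_metrics(X, y, race):
    """Fit a GBM and return per-group AUC, sensitivity, and specificity."""
    Xtr, Xte, ytr, yte, _, rte = train_test_split(
        X, y, race, test_size=0.3, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
    prob = clf.predict_proba(Xte)[:, 1]
    out = {}
    for g in (0, 1):
        m = rte == g
        pred = (prob[m] >= 0.5).astype(int)   # assumed threshold
        tn, fp, fn, tp = confusion_matrix(yte[m], pred, labels=[0, 1]).ravel()
        out[g] = {
            "auc": roc_auc_score(yte[m], prob[m]),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        }
    return out

aware = subgroup_metrics(X_aware, y, race)
blind = subgroup_metrics(X_blind, y, race)
for g in (0, 1):
    print(f"group {g}: aware AUC={aware[g]['auc']:.3f}, "
          f"blind AUC={blind[g]['auc']:.3f}, "
          f"blind sensitivity={blind[g]['sensitivity']:.3f}")
```

As in the paper's framing, similar aggregate AUCs between the two feature sets do not rule out subgroup differences in sensitivity or calibration, which is why the metrics are computed within each group rather than only overall.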
Pages: 10