Evaluating and addressing demographic disparities in medical large language models: a systematic review

Cited by: 0
Authors
Omar, Mahmud [1]
Sorin, Vera [2]
Agbareia, Reem [3]
Apakama, Donald U. [4]
Soroush, Ali [1]
Sakhuja, Ankit [1,4]
Freeman, Robert [1]
Horowitz, Carol R. [5]
Richardson, Lynne D. [5]
Nadkarni, Girish N. [1,4]
Klang, Eyal [1,4]
Affiliations
[1] Icahn Sch Med Mt Sinai, Div Data Driven & Digital Med D3M, New York, NY 10029 USA
[2] Mayo Clin, Diagnost Radiol, Rochester, MN USA
[3] Hadassah Med Ctr, Ophthalmol Dept, Jerusalem, Israel
[4] Icahn Sch Med Mt Sinai, Charles Bronfman Inst Personalized Med, New York, NY USA
[5] Icahn Sch Med Mt Sinai, Inst Hlth Equ Res, New York, NY USA
DOI
10.1186/s12939-025-02419-0
Chinese Library Classification
R1 [Preventive Medicine, Hygiene]
Discipline codes
1004; 120402
Abstract
Background: Large language models are increasingly evaluated for use in healthcare. However, concerns about their impact on disparities persist. This study reviews current research on demographic biases in large language models to identify prevalent bias types, assess measurement methods, and evaluate mitigation strategies.
Methods: We conducted a systematic review, searching publications from January 2018 to July 2024 across five databases. We included peer-reviewed studies evaluating demographic biases in large language models, focusing on gender, race, ethnicity, age, and other factors. Study quality was assessed using the Joanna Briggs Institute Critical Appraisal Tools.
Results: Our review included 24 studies. Of these, 22 (91.7%) identified biases. Gender bias was the most prevalent, reported in 15 of 16 studies (93.7%). Racial or ethnic biases were observed in 10 of 11 studies (90.9%). Only two studies found minimal or no bias in certain contexts. Mitigation strategies mainly included prompt engineering, with varying effectiveness. However, these findings are tempered by potential publication bias, as studies with negative results are less frequently published.
Conclusion: Biases are observed in large language models across various medical domains. While bias detection is improving, effective mitigation strategies are still developing. As LLMs increasingly influence critical decisions, addressing these biases and their resultant disparities is essential for ensuring fair artificial intelligence systems. Future research should focus on a wider range of demographic factors, intersectional analyses, and non-Western cultural contexts.
Pages: 10
    [J]. CULTURAL NEUROSCIENCE: CULTURAL INFLUENCES ON BRAIN FUNCTION, 2009, 178 : 263 - 283