Even with ChatGPT, race matters

被引:20
作者
Amin, Kanhai S. [1 ]
Forman, Howard P. [2 ]
Davis, Melissa A. [2 ]
机构
[1] Yale Coll, New Haven, CT USA
[2] Yale Sch Med, Dept Radiol & Biomed Imaging, New Haven, CT 06510 USA
关键词
ChatGPT; Large language models; Health equity; Radiology report; Implicit bias;
D O I
10.1016/j.clinimag.2024.110113
中图分类号
R8 [特种医学]; R445 [影像诊断学];
学科分类号
1002 ; 100207 ; 1009 ;
摘要
Background: Applications of large language models such as ChatGPT are increasingly being studied. Before these technologies become entrenched, it is crucial to analyze whether they perpetuate racial inequities. Methods: We asked Open AI's ChatGPT-3.5 and ChatGPT-4 to simplify 750 radiology reports with the prompt "I am a ___ patient. Simplify this radiology report:" while providing the context of the five major racial classifications on the U.S. census: White, Black or African American, American Indian or Alaska Native, Asian, and Native Hawaiian or other Pacific Islander. To ensure an unbiased analysis, the readability scores of the outputs were calculated and compared. Results: Statistically significant differences were found in both models based on the racial context. For ChatGPT3.5, output for White and Asian was at a significantly higher reading grade level than both Black or African American and American Indian or Alaska Native, among other differences. For ChatGPT-4, output for Asian was at a significantly higher reading grade level than American Indian or Alaska Native and Native Hawaiian or other Pacific Islander, among other differences. Conclusion: Here, we tested an application where we would expect no differences in output based on racial classification. Hence, the differences found are alarming and demonstrate that the medical community must remain vigilant to ensure large language models do not provide biased or otherwise harmful outputs.
引用
收藏
页数:4
相关论文
共 9 条
[1]   Large language models as a source of health information: Are they patient-centered? A longitudinal analysis [J].
Amin, Kanhai ;
Doshi, Rushabh ;
Forman, Howard P. .
HEALTHCARE-THE JOURNAL OF DELIVERY SCIENCE AND INNOVATION, 2024, 12 (01)
[2]  
Amin K, 2023, YALE J BIOL MED, V96, P407, DOI 10.59249/NKOY5498
[3]  
Amin KS, 2023, RADIOLOGY, V309, DOI 10.1148/radiol.232561
[4]  
[Anonymous], Bard Privacy Health Hub
[5]  
[Anonymous], PRIVACY POLICY
[6]   Harnessing the Promise of Artificial Intelligence Responsibly [J].
Dorr, David A. ;
Adams, Laura ;
Embi, Peter .
JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, 2023, 329 (16) :1347-1348
[7]  
Doshi R, 2023, medRxiv
[8]  
Hanna J. J., 2023, medRxiv, DOI DOI 10.1101/2023.08.28.23294730
[9]  
Zhang A., 2023, medRxiv, DOI [10.1101/2023.11.14.23298525, DOI 10.1101/2023.11.14.23298525]