Ethics of large language models in medicine and medical research

Cited by: 164
Authors
Li, Hanzhou [1 ]
Moon, John T. [1 ]
Purkayastha, Saptarshi [2 ]
Celi, Leo Anthony [3 ]
Trivedi, Hari [1 ]
Gichoya, Judy W. [1 ]
Affiliations
[1] Emory Univ, Sch Med, Dept Radiol & Imaging Sci, Atlanta, GA 30322 USA
[2] Purdue Univ, Indiana Univ, Sch Informat & Comp, Indianapolis, IN USA
[3] MIT, Cambridge, MA USA
DOI: 10.1016/S2589-7500(23)00083-3
CLC number: R-058
Pages: E333–E335 (3 pages)
Related references (6 items)
[1] Abid A, Farooqi M, Zou J. Large language models associate Muslims with violence. Nature Machine Intelligence, 2021, 3(6): 461-463.
[2] Guo W, Caliskan A. Detecting emergent intersectional biases: contextualized word embeddings contain a distribution of human-like biases. AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021: 122-133.
[3] Holzinger A. The next frontier: AI we can really trust. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021, Part I, 2021, 1524: 427-440.
[4] International Association of Scientific, Technical and Medical Publishers. AI ETH SCHOL COMM ST. 2021.
[5] Lucy L. Gender and representation bias in GPT-3 generated stories. 2021.
[6] The Lancet Digital Health. Lancet Digital Health, 2023, 5.