Ethics of large language models in medicine and medical research

Cited by: 122
Authors
Li, Hanzhou [1 ]
Moon, John T. [1 ]
Purkayastha, Saptarshi [2 ]
Celi, Leo Anthony [3 ]
Trivedi, Hari [1 ]
Gichoya, Judy W. [1 ]
Affiliations
[1] Emory Univ, Sch Med, Dept Radiol & Imaging Sci, Atlanta, GA 30322 USA
[2] Purdue Univ, Indiana Univ, Sch Informat & Comp, Indianapolis, IN USA
[3] MIT, Cambridge, MA USA
Source
LANCET DIGITAL HEALTH | 2023, Vol. 5, Issue 6
DOI: 10.1016/S2589-7500(23)00083-3
Chinese Library Classification (CLC): R-058
Pages: E333-E335
Page count: 3
References (6)
  • [1] Abid, Abubakar; Farooqi, Maheen; Zou, James. Large language models associate Muslims with violence. Nature Machine Intelligence, 2021, 3(6): 461-463.
  • [2] Guo, Wei; Caliskan, Aylin. Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases. AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021: 122-133.
  • [3] Holzinger, Andreas. The Next Frontier: AI We Can Really Trust. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021, Part I, 2021, 1524: 427-440.
  • [4] International Association of Scientific, Technical and Medical Publishers, 2021, AI ETH SCHOL COMM ST
  • [5] Lucy, L., 2021, GENDER REPRESENTATIO
  • [6] Patel, Sajan; Lam, Kyle. ChatGPT: friend or foe? Lancet Digital Health, 2023, 5(3): E102.