Human-Centered Design to Address Biases in Artificial Intelligence

Cited: 58
Authors
Chen, You [1 ,2 ,8 ]
Clayton, Ellen Wright [3 ,4 ,5 ]
Novak, Laurie Lovett [1 ]
Anders, Shilo [1 ,2 ,6 ]
Malin, Bradley [1 ,2 ,4 ,7 ]
Affiliations
[1] Vanderbilt Univ, Dept Biomed Informat, Med Ctr, Nashville, TN USA
[2] Vanderbilt Univ, Dept Comp Sci, Nashville, TN USA
[3] Vanderbilt Univ, Law Sch, Nashville, TN USA
[4] Vanderbilt Univ, Ctr Biomed Eth & Soc, Med Ctr, Nashville, TN USA
[5] Vanderbilt Univ, Dept Pediat, Med Ctr, Nashville, TN USA
[6] Vanderbilt Univ, Dept Anesthesiol, Med Ctr, Nashville, TN USA
[7] Vanderbilt Univ, Dept Biostat, Med Ctr, Nashville, TN USA
[8] Vanderbilt Univ, Med Ctr, Dept Biomed Informat, 2525 West End Ave, Nashville, TN 37203 USA
Funding
US National Institutes of Health
关键词
artificial intelligence; human-centered AI; biases; AI; care; biomedical; research; application; human-centered; development; design; patient; health; benefits; HEALTH DISPARITIES; PRECISION MEDICINE; PREDICTION; SEPSIS; EQUITY; MODEL;
DOI
10.2196/43251
Chinese Library Classification (CLC)
R19 [Health care organization and services (health services administration)]
Abstract
The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but it can also exacerbate these issues if not implemented in an equitable manner. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders, using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, AI can achieve its potential in health care.
Pages: 10