Algorithmic fairness in computational medicine

Cited by: 67
Authors
Xu, Jie [1 ,2 ]
Xiao, Yunyu [2 ]
Wang, Wendy Hui [3 ]
Ning, Yue [3 ]
Shenkman, Elizabeth A. [1 ]
Bian, Jiang [1 ]
Wang, Fei [2 ]
Affiliations
[1] Univ Florida, Dept Hlth Outcomes & Biomed Informat, Gainesville, FL USA
[2] Weill Cornell Med, Dept Populat Hlth Sci, New York, NY 10065 USA
[3] Stevens Inst Technol, Dept Comp Sci, Hoboken, NJ USA
Keywords
Algorithmic fairness; Computational medicine; Selection bias; SMOTE; Care
DOI
10.1016/j.ebiom.2022.104250
Chinese Library Classification
R5 [Internal Medicine]
Discipline Classification Codes
1002; 100201
Abstract
Machine learning models are increasingly adopted to facilitate clinical decision-making. However, recent research has shown that machine learning techniques may produce biased decisions for people in different subgroups, which can have detrimental effects on the health and well-being of specific demographic groups such as vulnerable ethnic minorities. This problem, termed algorithmic bias, has recently been studied extensively in theoretical machine learning. However, the impact of algorithmic bias on medicine, and methods to mitigate this bias, remain topics of active discussion. This paper presents a comprehensive review of algorithmic fairness in the context of computational medicine, which aims at improving medicine with computational approaches. Specifically, we overview the different types of algorithmic bias, fairness quantification metrics, and bias mitigation methods, and summarize popular software libraries and tools for bias evaluation and mitigation, with the goal of providing reference and insights to researchers and practitioners in computational medicine. Copyright (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
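The abstract mentions fairness quantification metrics among the topics reviewed. As an illustrative sketch (not taken from the paper itself), one of the most common such metrics is the demographic parity difference: the gap in positive-prediction rates between two demographic subgroups. The function, group labels, and data below are all hypothetical examples.

```python
def demographic_parity_difference(y_pred, groups, group_a, group_b):
    """Gap in positive-prediction rates between group_a and group_b.

    y_pred: iterable of binary predictions (0 or 1).
    groups: iterable of group labels, aligned with y_pred.
    A value of 0 indicates demographic parity under this metric.
    """
    preds_a = [p for p, g in zip(y_pred, groups) if g == group_a]
    preds_b = [p for p, g in zip(y_pred, groups) if g == group_b]
    rate_a = sum(preds_a) / len(preds_a)  # positive-prediction rate, group A
    rate_b = sum(preds_b) / len(preds_b)  # positive-prediction rate, group B
    return rate_a - rate_b

# Example: binary predictions for six patients in two demographic groups.
y_pred = [1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
gap = demographic_parity_difference(y_pred, groups, "A", "B")
# Group A receives positive predictions at rate 2/3, group B at 1/3,
# so the gap is 1/3 -- a disparity a mitigation method would try to reduce.
```

Toolkits such as AI Fairness 360 and Fairlearn, of the kind the review surveys, implement this and many related metrics (equalized odds, predictive parity, etc.) with more robust APIs.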
Pages: 10