Algorithmic individual fairness and healthcare: a scoping review

Cited by: 1
Authors
Anderson, Joshua W. [1 ]
Visweswaran, Shyam [1 ,2 ]
Affiliations
[1] Univ Pittsburgh, Intelligent Syst Program, 4741 Baum Blvd, Pittsburgh, PA 15213 USA
[2] Univ Pittsburgh, Biomed Informat, Pittsburgh, PA 15213 USA
Funding
National Institutes of Health (USA);
Keywords
algorithmic fairness; individual fairness; health disparities; healthcare;
DOI
10.1093/jamiaopen/ooae149
CLC Number
R19 [Health care organization and services (health services administration)];
Abstract
Objectives: Statistical and artificial intelligence algorithms are increasingly being developed for use in healthcare. These algorithms may reflect biases that magnify disparities in clinical care, and there is a growing need to understand how algorithmic biases can be mitigated in pursuit of algorithmic fairness. We conducted a scoping review of algorithmic individual fairness (IF) to understand the current state of research on the metrics and methods developed to achieve IF and their applications in healthcare.

Materials and Methods: We searched four databases (PubMed, ACM Digital Library, IEEE Xplore, and medRxiv) for articles on algorithmic IF metrics, algorithmic bias mitigation, and healthcare applications. The search was restricted to articles published between January 2013 and November 2024. We identified 2498 articles through database searches and seven additional articles through other sources; 32 articles were included in the review. Data from the selected articles were extracted, and the findings were synthesized.

Results: Based on the 32 articles in the review, we identified several themes, including philosophical underpinnings of fairness, IF metrics, mitigation methods for achieving IF, implications of achieving IF for group fairness and vice versa, and applications of IF in healthcare.

Discussion: We find that research on IF is still in its early stages, particularly in healthcare, as evidenced by the limited number of relevant articles published between 2013 and 2024. While healthcare applications of IF remain sparse, the number of publications has grown steadily since 2012. The limitations of group fairness further emphasize the need for alternative approaches such as IF. However, IF itself is not without challenges, including subjective definitions of similarity and the potential for data-driven methods to encode bias. These findings, coupled with the limitations of the review process, underscore the need for more comprehensive research on the evolution of IF metrics and definitions to advance this promising field.

Conclusion: While significant work has been done on algorithmic IF in recent years, the definition, use, and study of IF remain in their infancy, especially in healthcare. Future research is needed to comprehensively apply and evaluate IF in healthcare.

The use of algorithms in healthcare holds the potential to improve care delivery and reduce costs. However, these algorithms can sometimes reflect biases, leading to unfair treatment of individuals, particularly those from marginalized groups. This study reviews the concept of algorithmic individual fairness (IF), which requires that similar individuals be treated similarly. The review identifies various philosophies and methods used to achieve IF and highlights how they can address biases in healthcare. While IF approaches are still in their early stages, they show promise in reducing disparities in healthcare. The findings emphasize the need for further research to enhance fairness in healthcare algorithms and ensure equitable treatment of individuals.
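The abstract states the core IF principle (similar individuals should be treated similarly) only informally. For reference, a widely used formalization from the algorithmic fairness literature, the Lipschitz condition of Dwork et al. (2012), is sketched below; the notation is an assumption for illustration and is not taken from the article: M maps an individual to a distribution over outcomes, d is a task-specific similarity metric between individuals, and D is a distance between output distributions.

\[
  D\bigl(M(x),\, M(y)\bigr) \;\le\; d(x, y) \qquad \text{for all individuals } x, y,
\]

so that individuals who are close under d must receive correspondingly close treatment from the algorithm.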
Pages: 8