Ethical and Bias Considerations in Artificial Intelligence/Machine Learning

Cited by: 40
Authors
Hanna, Matthew G. [1,2]
Pantanowitz, Liron [1,2]
Jackson, Brian [3,4]
Palmer, Octavia [1,2]
Visweswaran, Shyam [5]
Pantanowitz, Joshua [6]
Deebajah, Mustafa [7]
Rashidi, Hooman H. [1,2]
Affiliations
[1] University of Pittsburgh Medical Center, Department of Pathology, Pittsburgh, PA 15260, USA
[2] University of Pittsburgh, Computational Pathology and AI Center of Excellence (CPACE), Pittsburgh, PA 15260, USA
[3] University of Utah, Department of Pathology, Salt Lake City, UT, USA
[4] ARUP Laboratories, Salt Lake City, UT, USA
[5] University of Pittsburgh, Department of Biomedical Informatics, Pittsburgh, PA, USA
[6] University of Pittsburgh, School of Medicine, Pittsburgh, PA, USA
[7] Cleveland Clinic, Department of Pathology, Cleveland, OH, USA
Keywords
artificial intelligence; bias; computational pathology; ethics; machine learning; pathology; radiomics features; health care; risk; score; prediction; guidelines; images
DOI
10.1016/j.modpat.2024.100686
Chinese Library Classification (CLC)
R36 [Pathology]
Subject Classification Code
100104
Abstract
As artificial intelligence (AI) gains prominence in pathology and medicine, the ethical implications and potential biases within such integrated AI models will require careful scrutiny. Ethics and bias are important considerations in our practice settings, especially as an increasing number of machine learning (ML) systems are being integrated within our various medical domains. Such ML-based systems have demonstrated remarkable capabilities in specific tasks such as image recognition, natural language processing, and predictive analytics. However, the bias that may exist within such AI-ML models can also inadvertently lead to unfair and potentially detrimental outcomes. Bias within such ML models can arise from numerous factors but is typically grouped into 3 main categories (data bias, development bias, and interaction bias), with sources that include the training data, algorithmic bias, feature engineering and selection issues, clinical and institutional bias (ie, practice variability), reporting bias, and temporal bias (ie, changes in technology, clinical practice, or disease patterns). Therefore, despite the potential of these AI-ML applications, their deployment in day-to-day practice also raises noteworthy ethical concerns. To address ethics and bias in medicine, a comprehensive evaluation process is required that encompasses all aspects of such systems, from model development through clinical deployment. Addressing these biases is crucial to ensure that AI-ML systems remain fair, transparent, and beneficial to all. This review discusses the relevant ethical and bias considerations in AI-ML, specifically within the pathology and medical domains. © 2024 THE AUTHORS. Published by Elsevier Inc. on behalf of the United States & Canadian Academy of Pathology. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
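As an illustration of the kind of bias evaluation the abstract calls for, the minimal Python sketch below compares a model's performance across patient subgroups to surface data, interaction, or institutional (practice-variability) bias. This is not code from the article itself; the dataframe columns (y_true, y_pred, sex, site), the binary-label assumption, and the 5-point disparity threshold are hypothetical choices made only for demonstration.

import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def subgroup_audit(df, group_col, y_true="y_true", y_pred="y_pred"):
    # Report per-subgroup accuracy and sensitivity (assumes binary labels 0/1).
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "accuracy": accuracy_score(sub[y_true], sub[y_pred]),
            "sensitivity": recall_score(sub[y_true], sub[y_pred], zero_division=0),
        })
    report = pd.DataFrame(rows)
    # Flag subgroups whose accuracy trails the best-performing subgroup by more than 5 points.
    report["flagged"] = report["accuracy"] < report["accuracy"].max() - 0.05
    return report

# Hypothetical usage on predictions logged during a silent (shadow) deployment:
# predictions = pd.read_csv("shadow_deployment_predictions.csv")
# print(subgroup_audit(predictions, group_col="sex"))
# print(subgroup_audit(predictions, group_col="site"))  # institutional / practice-variability bias

Such a per-subgroup report is one small piece of the comprehensive evaluation the authors describe, which would also span model development, validation, and post-deployment monitoring.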
Pages: 13