Manipulation of sources of bias in AI device development

Cited by: 0
Authors
Burgon, Alexis [1 ]
Zhang, Yuhang [1 ]
Sahiner, Berkman [1 ]
Petrick, Nicholas [1 ]
Cha, Kenny H. [1 ]
Samala, Ravi K. [1 ]
Affiliations
[1] US FDA, Ctr Devices & Radiol Hlth, Off Sci & Engn Labs, 10903 New Hampshire Ave, Silver Spring, MD 20993 USA
Source
COMPUTER-AIDED DIAGNOSIS, MEDICAL IMAGING 2024 | 2024, Vol. 12927
Funding
US National Institutes of Health
Keywords
artificial intelligence; bias; bias mitigation; fairness; image classification; machine learning;
DOI
10.1117/12.3008267
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The increased use of artificial intelligence (AI)-enabled medical devices in clinical practice has driven the need for a better understanding of bias in AI applications. While many bias mitigation methods exist, determining which method is appropriate in a given situation is not trivial due to a lack of objective bias mitigation assessment approaches. This study presents an approach that manipulates sources of bias in order to facilitate the evaluation of bias mitigation methods during AI development. The approach amplifies sources of bias in the data by varying disease prevalence between subgroups, promoting associations between patient subgroups and specific disease classes, and can be adjusted to control the degree to which bias is amplified. In this study, bias amplification was used to vary the bias of COVID-19 classification models trained on chest X-ray data between patient subgroups defined by sex (female or male) or race (Black or White). Analysis of subgroup sensitivity shows that the proposed bias amplification approach increases the sensitivity of one subgroup while decreasing that of the other, despite a consistent difference in subgroup area under the receiver operating characteristic curve. For example, amplification of sex-related bias increased the sensitivity of COVID-19 classification for the female subgroup from 0.65 ± 0.04 to 0.90 ± 0.02 while decreasing the sensitivity for the male subgroup from 0.66 ± 0.03 to 0.18 ± 0.02. This shows that the bias amplification approach promotes bias by causing the model to enhance existing correlations between a patient's subgroup and their COVID-19 status, which results in biased performance in a clinical setting. The approach presented in this study was found to systematically amplify sources of bias for our dataset and AI model, and can thus be used to evaluate the effectiveness of potential bias mitigation methods.
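The prevalence-manipulation idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `amplify_bias`, the subgroup labels "A"/"B", and the target prevalences are all hypothetical, and the sketch simply subsamples each subgroup so its disease prevalence matches a chosen target, which induces a spurious subgroup-disease association.

```python
import numpy as np

def amplify_bias(labels, groups, targets, rng):
    """Return indices of a subsample in which each subgroup g has
    disease prevalence targets[g], promoting an association between
    subgroup membership and disease class (hypothetical sketch)."""
    keep = []
    for g, prev in targets.items():
        pos = np.flatnonzero((groups == g) & (labels == 1))
        neg = np.flatnonzero((groups == g) & (labels == 0))
        # Size the subsample so positives make up `prev` of subgroup g,
        # retaining as many cases as possible given the limiting class.
        want_pos = prev / (1 - prev) * len(neg)
        if want_pos <= len(pos):
            n_pos, n_neg = int(want_pos), len(neg)
        else:
            n_pos, n_neg = len(pos), int((1 - prev) / prev * len(pos))
        keep.extend(rng.choice(pos, n_pos, replace=False))
        keep.extend(rng.choice(neg, n_neg, replace=False))
    return np.sort(np.asarray(keep))

# Toy data: 1000 cases, two subgroups, ~50% baseline prevalence.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
groups = np.where(rng.random(1000) < 0.5, "A", "B")
# Amplify: subgroup A skewed toward disease, subgroup B away from it.
idx = amplify_bias(labels, groups, {"A": 0.8, "B": 0.2}, rng)
```

A model trained on the subsampled indices can then learn the injected subgroup-disease correlation, and the gap between the two target prevalences serves as the adjustable degree of amplification.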
Pages: 6