Discrimination, Bias, Fairness, and Trustworthy AI

Cited: 31
Authors
Varona, Daniel [1 ]
Suarez, Juan Luis [1 ]
Affiliations
[1] CulturePlex Lab, London, ON N6A 3K6, Canada
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Iss. 12
Keywords
discrimination; bias; fairness; trustworthy ADMS; principled AI; social impact of AI; ethics and AI
DOI
10.3390/app12125826
Chinese Library Classification (CLC)
O6 [Chemistry]
Discipline Code
0703
Abstract
Featured Application: To survey the multiple definitions available for the variables "Discrimination", "Bias", "Fairness", and "Trustworthy AI" in the context of the social impact of algorithmic decision-making systems (ADMS), with the aim of reaching consensus on them as working variables for that context. In this study, we analyze "Discrimination", "Bias", "Fairness", and "Trustworthiness" as working variables in the context of the social impact of AI. We identify a set of specialized variables, such as security, privacy, and responsibility, that are used to operationalize the principles in the Principled AI International Framework. These specialized variables are defined so that they contribute to variables of more general scope, such as the ones analyzed here, in what appears to be a generalization-specialization relationship. Our aim is to understand how the available notions of bias, discrimination, and fairness, together with the related variables assured during a software project's lifecycle (security, privacy, responsibility, etc.), can be used when developing trustworthy algorithmic decision-making systems (ADMS). Because the Principled AI International Framework approaches bias, discrimination, and fairness with a mainly operational interest, we included sources from outside the framework to complement, from a conceptual standpoint, the study of these variables and of their relationships with each other.
Pages: 13