Algorithmic fairness in social context

Cited by: 0
Authors
Huang Y. [1 ]
Liu W. [1 ]
Gao W. [2 ]
Lu X. [1 ]
Liang X. [1 ]
Yang Z. [2 ]
Li H. [2 ]
Ma L. [1 ]
Tang S. [1 ]
Affiliations
[1] Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, No. 15 Yucai Road, Qixing District, Guilin, Guangxi
[2] Research Center for Advanced Computer Systems, Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Haidian District, Beijing
Source
BenchCouncil Transactions on Benchmarks, Standards and Evaluations | 2023 / Vol. 3 / No. 3
Funding
National Natural Science Foundation of China
关键词
Bias; Discrimination; Fairness algorithms; Fairness datasets; Fairness measure; Social fairness;
DOI
10.1016/j.tbench.2023.100137
Abstract
Algorithmic fairness research is currently receiving significant attention, aiming to ensure that algorithms do not discriminate between different groups or between individuals with similar characteristics. However, as algorithms have permeated every aspect of society, they have changed from mere instruments into social infrastructure. For instance, facial recognition algorithms are widely used for user verification and have become indispensable to many social infrastructures such as transportation and health care. As an instrument, an algorithm needs to attend to the fairness of its own behavior; as social infrastructure, it must pay even more attention to its impact on social fairness. Otherwise, it may exacerbate existing inequities or create new ones. For example, if an algorithm treats all passengers identically and, in the name of fairness, eliminates priority seats for pregnant women, it increases the risk pregnant women face on public transport and indirectly undermines their right to fair travel. Algorithms therefore have a responsibility to ensure social fairness, not merely fairness within their own operations. It is time to expand the concept of algorithmic fairness beyond mere behavioral equity, to assess algorithms in a broader societal context, and to examine whether they uphold and promote social fairness. This article analyzes the current status and challenges of algorithmic fairness from three key perspectives: fairness definitions, fairness datasets, and fairness algorithms. Furthermore, potential directions and strategies for promoting algorithmic fairness are proposed. © 2023 The Authors