Algorithmic fairness in social context

Cited by: 0
Authors
Huang Y. [1 ]
Liu W. [1 ]
Gao W. [2 ]
Lu X. [1 ]
Liang X. [1 ]
Yang Z. [2 ]
Li H. [2 ]
Ma L. [1 ]
Tang S. [1 ]
Affiliations
[1] Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, No. 15 Yucai Road, Qixing District, Guilin, Guangxi, China
[2] Research Center for Advanced Computer Systems, Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Haidian District, Beijing, China
Source
BenchCouncil Transactions on Benchmarks, Standards and Evaluations | 2023 / Vol. 3 / Issue 03
Funding
National Natural Science Foundation of China;
Keywords
Bias; Discrimination; Fairness algorithms; Fairness datasets; Fairness measure; Social fairness;
DOI
10.1016/j.tbench.2023.100137
Abstract
Algorithmic fairness research is currently receiving significant attention, aiming to ensure that algorithms do not discriminate between different groups or between individuals with similar characteristics. However, as algorithms permeate all aspects of society, they have changed from mere instruments into social infrastructure. For instance, facial recognition algorithms are widely used to provide user verification services and have become an indispensable part of many social infrastructures such as transportation and health care. As an instrument, an algorithm needs to attend to the fairness of its own behavior; as social infrastructure, it needs to attend even more to its impact on social fairness. Otherwise, it may exacerbate existing inequities or create new ones. For example, if an algorithm treats all passengers identically and, in the name of fairness, eliminates priority seats for pregnant women, it increases the risk pregnant women face on public transport and indirectly undermines their right to travel fairly. Algorithms therefore bear responsibility for social fairness, not merely for fairness within their own operations. It is now time to expand the concept of algorithmic fairness beyond mere behavioral equity, to assess algorithms in a broader societal context and examine whether they uphold and promote social fairness. This article analyzes the current status and challenges of algorithmic fairness from three key perspectives: fairness definitions, fairness datasets, and fairness algorithms. Furthermore, potential directions and strategies for promoting algorithmic fairness are proposed. © 2023 The Authors
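As a concrete illustration of the group-fairness notion described in the abstract (a minimal sketch, not a construction from the paper itself; the function name and toy data are hypothetical), the Python snippet below computes the demographic parity difference, one of the most common behavioral fairness measures: it compares the rate of favorable decisions across two groups, with 0 indicating parity.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # y_pred: binary model decisions (1 = favorable outcome)
    # group:  binary protected-attribute labels (0/1 for two groups)
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    # Rate of favorable outcomes within each group
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    # 0.0 means both groups receive favorable outcomes at the same rate
    return abs(rate_0 - rate_1)

# Toy example: a classifier that favors group 0
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.5

Measures of this kind capture only the behavioral equity of an algorithm's outputs; the article's argument is that once algorithms act as social infrastructure, such measures are necessary but not sufficient for assessing social fairness.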