FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data

Cited by: 4
Authors
Fang, Minghong [1 ]
Liu, Jia [1 ]
Momma, Michinari [2 ]
Sun, Yi [2 ]
Affiliations
[1] Ohio State Univ, Columbus, OH 43210 USA
[2] Amazon, Seattle, WA USA
Source
PROCEEDINGS OF THE 27TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2022 | 2022
Keywords
Fairness; Recommender Systems; Antidote Data;
DOI
10.1145/3532105.3535023
CLC number
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Today, recommender systems play an increasingly important role in shaping our experiences of digital environments and social interactions. However, as recommender systems become ubiquitous in our society, recent years have also witnessed significant fairness concerns about them. Specifically, studies have shown that recommender systems may inherit or even amplify biases from historical data and, as a result, provide unfair recommendations. To address fairness risks in recommender systems, most previous approaches have focused on modifying either the existing training data samples or the deployed recommender algorithms, but unfortunately with limited success. In this paper, we propose a new approach called fair recommendation with optimized antidote data (FairRoad), which aims to improve the fairness performance of recommender systems through the construction of a small and carefully crafted antidote dataset. Toward this end, we formulate the antidote data generation task as a mathematical optimization problem that minimizes the unfairness of the targeted recommender system while not disrupting the deployed recommendation algorithm. Extensive experiments show that our proposed antidote data generation algorithm significantly improves the fairness of recommender systems with a small amount of antidote data.
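The core idea in the abstract — appending a small set of optimized "antidote" ratings so that the retrained recommender becomes fairer, without touching the recommendation algorithm itself — can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual formulation: a tiny dense matrix-factorization model stands in for the deployed recommender, the unfairness proxy is simply the gap between two user groups' mean predicted ratings, and the antidote ratings are tuned by a forward-difference numerical gradient rather than the paper's optimization method.

```python
import numpy as np

def train_mf(R, k=2, iters=50, lam=0.1):
    """Tiny dense matrix-factorization trainer (alternating ridge steps)."""
    r = np.random.default_rng(1)  # fixed seed so the objective is deterministic
    n_users, n_items = R.shape
    U = r.normal(scale=0.1, size=(n_users, k))
    V = r.normal(scale=0.1, size=(n_items, k))
    I = lam * np.eye(k)
    for _ in range(iters):
        U = R @ V @ np.linalg.inv(V.T @ V + I)
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)
    return U, V

def unfairness(R_full, group, n_real):
    """Illustrative proxy: gap between the two groups' mean predicted ratings."""
    U, V = train_mf(R_full)
    preds = U[:n_real] @ V.T  # score only the real users, not antidote rows
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

# Synthetic biased history: group 0 systematically rates items higher.
rng = np.random.default_rng(0)
n_users, n_items = 20, 10
group = np.array([0] * 10 + [1] * 10)
R = rng.uniform(2, 4, size=(n_users, n_items))
R[group == 0] += 1.0  # injected historical bias

base = unfairness(R, group, n_users)

# A handful of antidote users, tuned by numerical gradient descent; a step
# is accepted only if it actually lowers the unfairness proxy.
n_anti, eps, lr = 3, 1e-3, 5.0
A = np.full((n_anti, n_items), 3.0)  # start from neutral ratings

def objective(A):
    return unfairness(np.vstack([R, A]), group, n_users)

start = objective(A)
for _ in range(10):
    f0 = objective(A)
    grad = np.zeros_like(A)
    for i in range(n_anti):
        for j in range(n_items):
            Ap = A.copy()
            Ap[i, j] += eps
            grad[i, j] = (objective(Ap) - f0) / eps
    A_new = np.clip(A - lr * grad, 1.0, 5.0)  # keep ratings in a valid range
    if objective(A_new) < f0:
        A = A_new
    else:
        lr *= 0.5  # backtrack when the step overshoots

print(f"unfairness before: {start:.3f}, after antidote tuning: {objective(A):.3f}")
```

Note the design point this sketch shares with FairRoad: the recommender's training procedure is treated as a black box that is simply re-run on the augmented data, so only the injected antidote rows are optimized, never the model or the historical ratings.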
Pages: 173-184
Page count: 12