FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data

Cited by: 4
Authors
Fang, Minghong [1 ]
Liu, Jia [1 ]
Momma, Michinari [2 ]
Sun, Yi [2 ]
Affiliations
[1] Ohio State Univ, Columbus, OH 43210 USA
[2] Amazon, Seattle, WA USA
Source
PROCEEDINGS OF THE 27TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2022 | 2022
Keywords
Fairness; Recommender Systems; Antidote Data;
DOI
10.1145/3532105.3535023
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline Code
0812;
Abstract
Today, recommender systems play an increasingly important role in shaping our experiences of digital environments and social interactions. However, as recommender systems become ubiquitous in our society, recent years have also witnessed significant fairness concerns about them. Specifically, studies have shown that recommender systems may inherit or even amplify biases from historical data and, as a result, provide unfair recommendations. To address fairness risks in recommender systems, most previous approaches focus on modifying either the existing training data samples or the deployed recommendation algorithms, but with limited success. In this paper, we propose a new approach called fair recommendation with optimized antidote data (FairRoad), which aims to improve the fairness performance of recommender systems through the construction of a small and carefully crafted antidote dataset. Toward this end, we formulate the antidote data generation task as a mathematical optimization problem that minimizes the unfairness of the targeted recommender system without disrupting the deployed recommendation algorithm. Extensive experiments show that our proposed antidote data generation algorithm significantly improves the fairness of recommender systems with a small amount of antidote data.
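The antidote-data idea described in the abstract can be sketched in code: append a few fully observed "antidote" user rows to the rating matrix, retrain the recommender, and adjust those rows so that the retrained model is fairer for the genuine users. The sketch below is illustrative only, not the paper's algorithm: it assumes an ALS matrix-factorization recommender, measures group unfairness as the squared gap between the mean predicted ratings of two item groups, and replaces the paper's gradient-based optimization with a simple zeroth-order coordinate search. All function names and parameters are invented for illustration.

```python
import numpy as np

def train_mf(R, mask, k=2, lam=0.1, iters=30, seed=0):
    """Alternating least squares matrix factorization on observed entries only."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(m, k))
    for _ in range(iters):
        for i in range(n):                     # update user factors
            obs = mask[i] > 0
            if obs.any():
                G = V[obs].T @ V[obs] + lam * np.eye(k)
                U[i] = np.linalg.solve(G, V[obs].T @ R[i, obs])
        for j in range(m):                     # update item factors
            obs = mask[:, j] > 0
            if obs.any():
                G = U[obs].T @ U[obs] + lam * np.eye(k)
                V[j] = np.linalg.solve(G, U[obs].T @ R[obs, j])
    return U, V

def group_unfairness(U, V, item_group):
    """Squared gap between mean predicted ratings of item groups 0 and 1."""
    P = U @ V.T
    return float((P[:, item_group == 0].mean() - P[:, item_group == 1].mean()) ** 2)

def optimize_antidote(R, mask, item_group, n_antidote=2, steps=25, eps=0.25, seed=1):
    """Zeroth-order coordinate search over antidote ratings: a perturbation is
    accepted only if retraining lowers the unfairness measured on the genuine
    users, so the recorded loss history is non-increasing."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    A = rng.uniform(1.0, 5.0, size=(n_antidote, m))  # antidote rows, fully observed
    ones = np.ones_like(A)

    def loss(A_try):
        U, V = train_mf(np.vstack([R, A_try]), np.vstack([mask, ones]))
        return group_unfairness(U[:n], V, item_group)  # fairness for genuine users

    history = [loss(A)]
    for _ in range(steps):
        i, j = rng.integers(n_antidote), rng.integers(m)
        for delta in (eps, -eps):              # try nudging one rating up, then down
            A_try = A.copy()
            A_try[i, j] = np.clip(A_try[i, j] + delta, 1.0, 5.0)
            l = loss(A_try)
            if l < history[-1]:
                A = A_try
                break
        history.append(min(l, history[-1]))
    return A, history
```

Note the key design constraint from the abstract: only the antidote rows are optimized, while the genuine ratings and the deployed training algorithm (`train_mf` here) are left untouched.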
Pages: 173-184
Page count: 12