FairUP: A Framework for Fairness Analysis of Graph Neural Network-Based User Profiling Models

Cited by: 6
Authors
Abdelrazek, Mohamed [1 ]
Purificato, Erasmo [1 ,2 ]
Boratto, Ludovico [3 ]
De Luca, Ernesto William [1 ,2 ]
Affiliations
[1] Otto von Guericke Univ, Magdeburg, Germany
[2] Georg Eckert Inst, Leibniz Inst Educ Media, Braunschweig, Germany
[3] Univ Cagliari, Dept Math & Comp Sci, Cagliari, Italy
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Keywords
User Profiling; Graph Neural Networks; Algorithmic Fairness;
DOI
10.1145/3539618.3591814
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Modern user profiling approaches capture different forms of interactions with the data, from user-item to user-user relationships. Graph Neural Networks (GNNs) have become a natural way to model these behaviours and build efficient and effective user profiles. However, each GNN-based user profiling approach has its own way of processing information, thus creating a heterogeneity that hinders the benchmarking of these techniques. To overcome this issue, we present FairUP, a framework that standardises the input needed to run three state-of-the-art GNN-based models for user profiling tasks. Moreover, given the growing importance of algorithmic fairness in the evaluation of machine learning systems, FairUP includes two additional components to (1) analyse pre-processing and post-processing fairness and (2) mitigate the potential presence of unfairness in the original datasets through three pre-processing debiasing techniques. The framework, while extensible in multiple directions, in its first version allows the user to conduct experiments on four real-world datasets. The source code is available at https://link.erasmopurif.com/FairUP-source-code, and the web application is available at https://link.erasmopurif.com/FairUP.
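The pre-processing fairness analysis the abstract refers to typically computes group-fairness statistics on a dataset before any model is trained. A minimal sketch of one such statistic, statistical parity difference, is shown below; the function name and toy data are illustrative assumptions, not FairUP's actual API.

```python
def statistical_parity_difference(labels, sensitive):
    """P(y=1 | s=0) - P(y=1 | s=1) for binary labels and a
    binary sensitive attribute; 0 indicates statistical parity."""
    g0 = [y for y, s in zip(labels, sensitive) if s == 0]
    g1 = [y for y, s in zip(labels, sensitive) if s == 1]
    return sum(g0) / len(g0) - sum(g1) / len(g1)

# Toy profiling labels (e.g. predicted user class) and a sensitive
# attribute (e.g. a binarised demographic feature).
labels = [1, 1, 0, 1, 0, 0, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(labels, sensitive))  # 0.75 - 0.25 = 0.5
```

A pre-processing debiasing step of the kind FairUP offers would then aim to transform the dataset so that this difference moves toward zero before a GNN is trained on it.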
Pages: 3165-3169
Page count: 5