Fairness-aware Prompt Tuning for Graph Neural Networks

Cited by: 1
Authors
Li, Zhengpin [1]
Lin, Minhua [2]
Wang, Jian [1,3]
Wang, Suhang [2]
Affiliations
[1] Fudan Univ, Shanghai, Peoples R China
[2] Penn State Univ, State Coll, PA USA
[3] Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
Source
PROCEEDINGS OF THE ACM WEB CONFERENCE 2025, WWW 2025 | 2025
Funding
National Key R&D Program of China;
Keywords
Graph Neural Networks; Fairness; Graph Prompt;
DOI
10.1145/3696410.3714780
Chinese Library Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Graph prompt tuning has achieved significant success owing to its ability to effectively adapt pre-trained graph neural networks (GNNs) to various downstream tasks. However, pre-trained models may learn discriminatory representations due to the inherent prejudice in graph-structured data. Existing graph prompt tuning methods overlook such unfairness, leading to outputs biased toward certain demographic groups determined by sensitive attributes such as gender, age, and political ideology. To overcome this limitation, we propose FPrompt, a fairness-aware graph prompt tuning method that promotes fairness while enhancing the generality of any pre-trained GNN. FPrompt introduces hybrid graph prompts to augment counterfactual data while aligning the pre-training and downstream tasks, and it applies edge modification to increase sensitive heterophily. We provide a two-fold theoretical analysis: first, we demonstrate that FPrompt can handle pre-trained GNN models across various pre-training strategies, ensuring its adaptability to different scenarios; second, we show that FPrompt effectively reduces the upper bound of generalized statistical parity, thereby mitigating the bias of pre-trained models. Extensive experiments demonstrate that FPrompt outperforms baseline models in both accuracy and fairness (by ~33%) on benchmark datasets. Additionally, we introduce a new benchmark for transferable evaluation, on which FPrompt achieves state-of-the-art generalization performance.
Pages: 3586-3597 (12 pages)