BA-GNN: On Learning Bias-Aware Graph Neural Network

Times Cited: 26
Authors
Chen, Zhengyu [1 ,2 ]
Xiao, Teng [3 ]
Kuang, Kun [1 ,4 ,5 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[2] Alibaba Zhejiang Univ Joint Res Inst Frontier Tec, Hangzhou, Peoples R China
[3] Penn State Univ, University Pk, PA 16802 USA
[4] Zhejiang Univ, Shanghai Inst Adv Study, Shanghai, Peoples R China
[5] Shanghai AI Lab, Shanghai, Peoples R China
Source
2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022) | 2022
Funding
National Natural Science Foundation of China
Keywords
Graph Neural Network; Distribution Shift;
DOI
10.1109/ICDE53745.2022.00271
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Graph Neural Networks (GNNs) show promising results on semi-supervised learning tasks on graphs and compare favorably with other approaches. However, like other machine learning models, GNNs can suffer from bias caused by the distribution shift between training and testing node distributions. More importantly, in practice the test node distribution is generally unknown during model training. In this paper, we focus on addressing the bias issue on graphs and on learning a graph neural network model that is robust to arbitrary unknown distribution shifts. To this end, we propose a novel Bias-Aware Graph Neural Network (BA-GNN) framework that learns node representations invariant across different distributions, enabling invariant prediction. Specifically, the BA-GNN framework contains two interacting parts: one for bias identification and the other for invariant prediction. To learn invariant features and aggregated representations, BA-GNN infers multiple biased graph partitions and selects features, neighbors, and propagation steps for nodes under these partitions. Extensive experiments show that the proposed BA-GNN framework significantly improves different GNN backbones, such as GCN, GAT, APPNP, and GraphSAGE, across multiple datasets.
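The core idea in the abstract — penalizing a predictor whose risk differs across multiple biased graph partitions so that the learned representation becomes invariant — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the `aggregate`, `risk`, and `invariance_penalty` helpers and the variance-of-per-partition-risks penalty are assumptions standing in for BA-GNN's actual objective.

```python
def aggregate(features, adj):
    """One mean-aggregation step: each node averages its own feature
    with those of its neighbors (a toy stand-in for GNN propagation)."""
    out = []
    for i, feat in enumerate(features):
        neigh = [features[j] for j in adj[i]] + [feat]
        out.append(sum(neigh) / len(neigh))
    return out

def risk(preds, labels, nodes):
    """Mean squared error restricted to one biased partition's nodes."""
    return sum((preds[i] - labels[i]) ** 2 for i in nodes) / len(nodes)

def invariance_penalty(preds, labels, partitions):
    """Variance of per-partition risks: zero exactly when the predictor
    performs identically on every biased partition, a common proxy for
    invariance across environments (assumed here, not taken from the paper)."""
    risks = [risk(preds, labels, p) for p in partitions]
    mean = sum(risks) / len(risks)
    return sum((r - mean) ** 2 for r in risks) / len(risks)

# Toy usage: a 3-node path graph 0-1-2 with scalar features.
adj = {0: [1], 1: [0, 2], 2: [1]}
reps = aggregate([1.0, 2.0, 3.0], adj)   # smoothed representations
```

During training, such a penalty would be added to the task loss; minimizing it pushes the model toward representations whose predictive performance does not depend on which biased partition a node falls into.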
Pages: 3012-3024 (13 pages)