Learning Strong Graph Neural Networks with Weak Information

Cited by: 25
Authors
Liu, Yixin [1 ]
Ding, Kaize [2 ]
Wang, Jianling [3 ]
Lee, Vincent [1 ]
Liu, Huan [2 ]
Pan, Shirui [4 ]
Affiliations
[1] Monash Univ, Clayton, Vic, Australia
[2] Arizona State Univ, Tempe, AZ 85287 USA
[3] Texas A&M Univ, College Stn, TX 77843 USA
[4] Griffith Univ, Nathan, Qld, Australia
Source
PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023 | 2023
Keywords
Graph Neural Networks; Missing Data; Few-Label Learning; Convolutional Networks
DOI
10.1145/3580305.3599410
CLC number
TP [Automation & Computer Technology]
Subject classification code
0812
Abstract
Graph Neural Networks (GNNs) have exhibited impressive performance in many graph learning tasks. Nevertheless, the performance of GNNs can deteriorate when the input graph data suffer from weak information, i.e., incomplete structure, incomplete features, and insufficient labels. Most prior studies, which attempt to learn from graph data with a specific type of weak information, are far from effective in the scenario where diverse data deficiencies coexist and mutually affect each other. To fill this gap, in this paper we develop an effective and principled approach to the problem of graph learning with weak information (GLWI). Based on the findings of our empirical analysis, we derive two design focal points for solving GLWI: enabling long-range propagation in GNNs, and allowing information to propagate to stray nodes isolated from the largest connected component. Accordingly, we propose D²PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities. We further develop a prototype contrastive alignment algorithm that aligns the class-level prototypes learned from the two channels, so that the two information propagation processes mutually benefit each other and the final model can well handle the GLWI problem. Extensive experiments on eight real-world benchmark datasets demonstrate the effectiveness and efficiency of our proposed method in various GLWI scenarios.
Pages: 1559-1571
Page count: 13
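
For readers who want a concrete picture, below is a minimal, hypothetical sketch of the two ingredients the abstract describes: long-range propagation on both the input graph and a feature-similarity (kNN) global graph, and a contrastive loss that aligns class-level prototypes across the two channels. This is not the authors' released implementation of D²PT; the APPNP-style diffusion operator, the cosine-similarity kNN construction, the InfoNCE-style prototype loss, and all function names and hyperparameters (alpha, k, tau) are illustrative assumptions.

```python
# Hypothetical sketch of dual-channel long-range propagation and
# prototype alignment; dense adjacency matrices for brevity.
import torch
import torch.nn.functional as F

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

def diffuse(adj_norm, x, alpha=0.1, k=10):
    """APPNP-style propagation (assumed stand-in for the paper's
    long-range operator): k personalized-PageRank power iterations,
    so information can reach distant nodes."""
    h = x
    for _ in range(k):
        h = (1 - alpha) * adj_norm @ h + alpha * x
    return h

def knn_graph(x, k=5):
    """Global graph from cosine feature similarity (kNN), connecting
    nodes that are semantically close even if structurally stray."""
    sim = F.normalize(x, dim=1) @ F.normalize(x, dim=1).t()
    adj = torch.zeros_like(sim)
    adj.scatter_(1, sim.topk(k + 1, dim=1).indices, 1.0)  # includes self
    adj = ((adj + adj.t()) > 0).float()
    adj.fill_diagonal_(0.0)
    return adj

def prototype_alignment_loss(z_a, z_b, labels, num_classes, tau=0.5):
    """InfoNCE over class prototypes: each channel-A prototype should
    match the same-class prototype from channel B and repel the rest.
    Assumes every class has at least one labeled node."""
    protos_a = torch.stack([z_a[labels == c].mean(0) for c in range(num_classes)])
    protos_b = torch.stack([z_b[labels == c].mean(0) for c in range(num_classes)])
    logits = F.normalize(protos_a, dim=1) @ F.normalize(protos_b, dim=1).t() / tau
    return F.cross_entropy(logits, torch.arange(num_classes))

# Toy usage: random features, a sparse input graph, a few labels.
torch.manual_seed(0)
x = torch.randn(100, 16)
adj_in = (torch.rand(100, 100) < 0.02).float()
adj_in = ((adj_in + adj_in.t()) > 0).float()
labels = torch.randint(0, 4, (100,))
z_a = diffuse(normalize_adj(adj_in), x)             # structural channel
z_b = diffuse(normalize_adj(knn_graph(x, k=5)), x)  # global/semantic channel
loss = prototype_alignment_loss(z_a, z_b, labels, num_classes=4)
```

In a full model, the diffused embeddings would feed a shared transformation network trained with a supervised loss on the labeled nodes plus the alignment term; the sketch above only illustrates how the two channels and the prototype alignment fit together.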