On the Power of Graph Neural Networks and Feature Augmentation Strategies to Classify Social Networks

Cited by: 0
Authors
Guettala, Walid [1 ]
Gulyas, Laszlo [1 ]
Affiliations
[1] Eotvos Lorand Univ, Dept Artificial Intelligence, Budapest, Hungary
Source
INTELLIGENT INFORMATION AND DATABASE SYSTEMS, PT II, ACIIDS 2024 | 2024, Vol. 14796
Keywords
Graph Neural Networks; Benchmark; Graph Classification; Feature Augmentation; Social Networks
DOI
10.1007/978-981-97-4985-0_23
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper studies four Graph Neural Network (GNN) architectures for a graph classification task on a synthetic dataset created using classic generative models of Network Science. Since the synthetic networks contain no node or edge features, five augmentation strategies (artificial node feature types) are applied. All combinations of the 4 GNNs (GCN with Hierarchical and with Global aggregation, GIN, and GATv2) and the 5 feature types (constant 1, noise, degree, normalized degree, and ID, a vector of the numbers of cycles of various lengths) are studied, and their performance is compared as a function of the hidden dimension of the artificial neural networks used in the GNNs. The generalisation ability of these models is also analysed using a second synthetic network dataset, containing networks of different sizes. Our results point towards the balanced importance of the computational power of the GNN architecture and the information level provided by the artificial features. GNN architectures with higher computational power, like GIN and GATv2, perform well for most augmentation strategies. On the other hand, artificial features with higher information content, like ID or degree, not only consistently outperform other augmentation strategies, but can also help GNN architectures with lower computational power to achieve good performance.
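The abstract does not specify how the augmentation strategies are computed; the sketch below illustrates one plausible way to attach the five artificial node feature types to featureless synthetic graphs, assuming networkx and NumPy. The function name augment_features is hypothetical, and the ID feature is approximated here by closed-walk counts (diagonals of adjacency-matrix powers), which may differ from the authors' cycle-count definition.

# Minimal sketch (not the authors' code): the five node feature augmentation
# strategies described in the abstract, for featureless synthetic graphs.
import networkx as nx
import numpy as np

def augment_features(G, strategy, id_max_len=6, seed=0):
    """Return an (n_nodes, d) artificial feature matrix for a featureless graph G."""
    n = G.number_of_nodes()
    rng = np.random.default_rng(seed)
    degrees = np.array([d for _, d in G.degree()], dtype=float)

    if strategy == "constant":            # constant 1 for every node
        return np.ones((n, 1))
    if strategy == "noise":               # uninformative random feature
        return rng.normal(size=(n, 1))
    if strategy == "degree":              # raw node degree
        return degrees.reshape(-1, 1)
    if strategy == "normalized_degree":   # degree scaled to [0, 1]
        return (degrees / max(degrees.max(), 1.0)).reshape(-1, 1)
    if strategy == "id":
        # Assumption: approximate the per-node cycle-count vector with the
        # diagonal of A^k, i.e. counts of closed walks of length k = 2..id_max_len.
        A = nx.to_numpy_array(G)
        Ak = A.copy()
        feats = []
        for k in range(2, id_max_len + 1):
            Ak = Ak @ A
            feats.append(np.diag(Ak).copy())
        return np.stack(feats, axis=1)
    raise ValueError(f"unknown strategy: {strategy!r}")

# Example: augment a Barabasi-Albert graph (one of the classic generative models)
G = nx.barabasi_albert_graph(n=100, m=2, seed=42)
X = augment_features(G, "id")   # shape (100, 5): walk counts for lengths 2..6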
Pages: 287-301
Number of pages: 15