Finding core labels for maximizing generalization of graph neural networks

Cited by: 1
Authors
Fu, Sichao [1]
Ma, Xueqi [2]
Zhan, Yibing [3]
You, Fanyu [4]
Peng, Qinmu [1]
Liu, Tongliang [5]
Bailey, James
Mandic, Danilo [2,6]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Univ Melbourne, Sch Comp & Informat Syst, Parkville, Vic 3010, Australia
[3] JD Explore Acad, Beijing 100176, Peoples R China
[4] Univ Southern Calif, Los Angeles, CA 90005 USA
[5] Univ Sydney, Fac Engn, Sch Comp Sci, Trustworthy Machine Learning Lab, Camperdown, NSW 2006, Australia
[6] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2BX, England
Funding
National Natural Science Foundation of China
Keywords
Graph neural networks; Semi-supervised learning; Node classification; Data-centric
DOI
10.1016/j.neunet.2024.106635
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks (GNNs) have become a popular approach for semi-supervised graph representation learning. GNN research has generally focused on improving methodological details, whereas far less attention has been paid to the importance of which data are labeled. For semi-supervised learning, however, the quality of the training data is vital. In this paper, we first introduce and elaborate on the problem of training data selection for GNNs. More specifically, focusing on node classification, we aim to select the representative nodes of a graph on which GNNs are trained, so as to achieve the best performance. Inspired by the popular lottery ticket hypothesis, typically applied to sparse architectures, we propose the following subset hypothesis for graph data: "When selecting a fixed-size training set from the dense training data, there exists a core subset that represents the properties of the full dataset, and GNNs trained on this core subset can achieve a better graph representation." Equipped with this subset hypothesis, we present an efficient algorithm to identify the core data in a graph for GNNs. Extensive experiments demonstrate that the selected data, used as a training set, yields performance improvements across various datasets and GNN architectures.
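To make the training-data selection problem concrete, the minimal sketch below pairs a simple class-balanced, degree-centrality scoring heuristic with a two-layer GCN trained only on the selected nodes. The heuristic, the toy random graph, and all names (select_core_subset, budget, GCN) are illustrative assumptions standing in for the paper's actual selection algorithm, which is not reproduced here.

```python
# Sketch only: a degree-based, class-balanced selector stands in for the
# paper's core-subset algorithm. Plain PyTorch, no external graph library.
import torch
import torch.nn.functional as F

def normalize_adjacency(adj):
    """Symmetrically normalize A + I, as in a standard GCN."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

def select_core_subset(adj, labels, candidate_idx, budget):
    """Pick a fixed-size, class-balanced training subset.

    Heuristic (an assumption, not the paper's method): within each
    class, prefer high-degree (central) candidate nodes.
    """
    degree = adj.sum(dim=1)
    classes = labels[candidate_idx].unique()
    per_class = budget // len(classes)
    selected = []
    for c in classes:
        pool = candidate_idx[labels[candidate_idx] == c]
        order = torch.argsort(degree[pool], descending=True)
        selected.append(pool[order[:per_class]])
    return torch.cat(selected)

class GCN(torch.nn.Module):
    """Two-layer GCN operating on a precomputed normalized adjacency."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

# Toy usage on a random graph (hypothetical data, for shape-checking only).
n, d, k = 200, 16, 4
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()          # symmetrize
x = torch.randn(n, d)
y = torch.randint(0, k, (n,))

train_idx = select_core_subset(adj, y, torch.arange(n), budget=40)
adj_norm = normalize_adjacency(adj)
model = GCN(d, 32, k)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
for _ in range(100):
    opt.zero_grad()
    out = model(x, adj_norm)
    loss = F.cross_entropy(out[train_idx], y[train_idx])
    loss.backward()
    opt.step()
```

Any node-scoring rule can be swapped in for the degree heuristic; the point of the sketch is only that GNN training consumes a fixed-size core subset rather than the full labeled pool.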
Pages: 12