Finding core labels for maximizing generalization of graph neural networks

Cited by: 1
Authors
Fu, Sichao [1 ]
Ma, Xueqi [2 ]
Zhan, Yibing [3 ]
You, Fanyu [4 ]
Peng, Qinmu [1 ]
Liu, Tongliang [5 ]
Bailey, James
Mandic, Danilo [2 ,6 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Univ Melbourne, Sch Comp & Informat Syst, Parkville, Vic 3010, Australia
[3] JD Explore Acad, Beijing 100176, Peoples R China
[4] Univ Southern Calif, Los Angeles, CA 90005 USA
[5] Univ Sydney, Fac Engn, Sch Comp Sci, Trustworthy Machine Learning Lab, Camperdown, NSW 2006, Australia
[6] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2BX, England
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks; Semi-supervised learning; Node classification; Data-centric;
DOI
10.1016/j.neunet.2024.106635
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph neural networks (GNNs) have become a popular approach for semi-supervised graph representation learning. GNN research has generally focused on improving methodological details, whereas less attention has been paid to the importance of how the training data are labeled. However, for semi-supervised learning, the quality of the training data is vital. In this paper, we first introduce and elaborate on the problem of training data selection for GNNs. More specifically, focusing on node classification, we aim to select the representative nodes of a graph that, when used to train a GNN, yield the best performance. To solve this problem, we draw inspiration from the popular lottery ticket hypothesis, typically applied to sparse architectures, and propose the following subset hypothesis for graph data: "When selecting a fixed-size training set from the dense pool of labeled data, there exists a core subset that represents the properties of the whole dataset, and GNNs trained on this core subset achieve a better graph representation." Equipped with this subset hypothesis, we present an efficient algorithm to identify the core data in the graph for GNNs. Extensive experiments demonstrate that the selected data (used as a training set) yield performance improvements across various datasets and GNN architectures.
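The abstract describes the workflow only at a high level. As a hedged illustration of the subset hypothesis, the minimal Python sketch below selects a fixed-size "core" training set of nodes and trains a standard GCN on it. It assumes PyTorch Geometric and the Cora benchmark, and uses a simple per-class highest-degree heuristic purely as a stand-in for the paper's core-label selection algorithm, which is not given in this record.

# Hedged sketch of the subset-hypothesis workflow: select a fixed-size core
# training set, then train a GCN on it. The degree-based selection below is
# a placeholder assumption, NOT the paper's actual algorithm.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv
from torch_geometric.utils import degree

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

def select_core_nodes(data, per_class=20):
    # Per class, keep the labeled nodes with the highest degree (a crude
    # proxy for "representative" nodes). For illustration we assume labels
    # are available for the whole candidate pool; in practice selection
    # would be restricted to a labeled or to-be-labeled subset.
    deg = degree(data.edge_index[0], num_nodes=data.num_nodes)
    core_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
    for c in range(int(data.y.max()) + 1):
        candidates = (data.y == c).nonzero(as_tuple=True)[0]
        top = candidates[deg[candidates].argsort(descending=True)[:per_class]]
        core_mask[top] = True
    return core_mask

core_mask = select_core_nodes(data)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(F.dropout(x, training=self.training), edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

# Train only on the selected core nodes instead of the default split.
model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[core_mask], data.y[core_mask])
    loss.backward()
    optimizer.step()

Swapping select_core_nodes for a different scoring rule (feature-space coverage, influence scores, etc.) is the natural point of variation; the rest of the pipeline is unchanged.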
Pages: 12