Enhancing graph convolutional networks with progressive granular ball sampling fusion: A novel approach to efficient and accurate GCN training

Cited by: 2
Authors
Cong, Hui [1 ]
Sun, Qiguo [1 ]
Yang, Xibei [1 ]
Liu, Keyu [1 ]
Qian, Yuhua [2 ]
Affiliations
[1] Jiangsu Univ Sci & Technol, Sch Comp, Zhenjiang 212100, Jiangsu, Peoples R China
[2] Shanxi Univ, Inst Big Data Sci & Ind, Taiyuan 030006, Shanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Granular-ball sampling; Graph convolutional networks; Incremental training; Node classification; Semi-supervised learning; CLASSIFICATION;
DOI
10.1016/j.ins.2024.120831
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Graph convolutional networks (GCNs) have gained considerable attention and are widely used in graph data analytics. However, training large GCNs is challenging owing to the inherent complexity of graph-structured data. Previous training algorithms frequently suffer from slow convergence caused by full-batch gradient descent on the entire graph, and from reduced model performance due to inappropriate node sampling. To address these issues, we propose a novel framework called Progressive Granular Ball Sampling Fusion (PGBSF). PGBSF leverages granular ball sampling to partition the original graph into a collection of subgraphs, enhancing both training efficiency and detail capture. It then applies a progressive, parameter-sharing strategy for incremental GCN training, which yields robust performance and rapid convergence. This simple yet effective strategy considerably improves classification accuracy and memory efficiency. Experimental results show that the proposed architecture consistently outperforms baseline models in accuracy across almost all datasets and label rates. In addition, PGBSF improves GCN performance significantly on large and complex datasets. Moreover, GCN+PGBSF reduces time complexity by training on subgraphs and achieves the fastest convergence among all compared models, with relatively small variance in the training loss.
Pages: 14
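
The abstract outlines the core training loop: partition the graph into subgraphs, then train a single GCN incrementally, one subgraph stage at a time, with every stage sharing the same parameters. The sketch below illustrates that loop under stated assumptions: this record does not describe the granular-ball partition itself, so a hypothetical random_partition stand-in is used, and the two-layer GCN is a generic PyTorch implementation, not the authors' code.

```python
# Minimal sketch of progressive subgraph training with parameter sharing,
# in the spirit of PGBSF as summarized in the abstract. ASSUMPTIONS:
# `random_partition` is a hypothetical stand-in for granular-ball sampling,
# and the GCN is a generic two-layer dense implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj is a symmetrically normalized dense adjacency matrix.
        return self.lin(adj @ x)

class GCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.l1 = GCNLayer(in_dim, hid_dim)
        self.l2 = GCNLayer(hid_dim, n_classes)

    def forward(self, x, adj):
        return self.l2(F.relu(self.l1(x, adj)), adj)

def normalize(adj):
    # Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2.
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

def random_partition(n, k):
    # Hypothetical stand-in for granular-ball sampling: split the n nodes
    # into k disjoint subgraphs at random.
    perm = torch.randperm(n)
    return [perm[i::k] for i in range(k)]

# Toy data: 100 nodes, 16 features, 3 classes, random symmetric graph.
n, f, c = 100, 16, 3
x = torch.randn(n, f)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
y = torch.randint(0, c, (n,))

model = GCN(f, 32, c)   # one shared model across all stages (parameter sharing)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# Progressive training: start on one subgraph, then grow the training set
# by merging in one more subgraph per stage, reusing the same parameters.
seen = torch.tensor([], dtype=torch.long)
for part in random_partition(n, 4):
    seen = torch.cat([seen, part])
    sub_adj = normalize(adj[seen][:, seen])
    for _ in range(50):
        opt.zero_grad()
        out = model(x[seen], sub_adj)
        loss = F.cross_entropy(out, y[seen])
        loss.backward()
        opt.step()
    print(f"stage with {len(seen)} nodes: loss={loss.item():.3f}")
```

Swapping random_partition for a partitioner that groups structurally coherent nodes, as granular-ball sampling does according to the abstract, is what would distinguish PGBSF from plain subgraph mini-batching in this sketch.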