Iterative Graph Self-Distillation

Cited by: 2
Authors
Zhang, Hanlin [1 ]
Lin, Shuai [2 ]
Liu, Weiyang [3 ]
Zhou, Pan [4 ]
Tang, Jian [5 ]
Liang, Xiaodan [2 ]
Xing, Eric P. [6 ]
Affiliations
[1] Carnegie Mellon Univ, Machine Learning Dept, Pittsburgh, PA 15213 USA
[2] Sun Yat Sen Univ, Sch Intelligent Syst Engn, Guangzhou 510275, Guangdong, Peoples R China
[3] Univ Cambridge, Dept Comp Sci, Cambridge CB2 1TN, England
[4] SEA Grp Ltd, SEA AI Lab, Singapore 138680, Singapore
[5] HEC Montreal, Montreal, PQ H3T 2A7, Canada
[6] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
Keywords
Task analysis; Representation learning; Kernel; Graph neural networks; Iterative methods; Data augmentation; Training; graph representation learning; self-supervised learning;
DOI
10.1109/TKDE.2023.3303885
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose a method called Iterative Graph Self-Distillation (IGSD), which learns graph-level representations in an unsupervised manner through instance discrimination with a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representations of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning the IGSD-trained models with self-training can further improve graph representation learning. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which validates the superiority of IGSD.
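The abstract describes two core ingredients: a teacher maintained as an exponential moving average (EMA) of the student, and a loss that trains the student to predict the teacher's representation of a differently augmented view. The following is a minimal PyTorch-style sketch of those two pieces only; the toy encoders, predictor head, momentum value, and the normalized regression-style loss are illustrative assumptions rather than the authors' implementation, and IGSD additionally relies on graph encoders and graph diffusion augmentations not shown here.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.99):
    """EMA update of the teacher parameters toward the student (no gradients)."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.data.mul_(momentum).add_(s_p.data, alpha=1 - momentum)

def distillation_loss(student_pred, teacher_repr):
    """Student predicts the teacher's representation of the other view
    (L2 distance between normalized vectors, i.e. 2 - 2*cosine)."""
    p = F.normalize(student_pred, dim=-1)
    z = F.normalize(teacher_repr.detach(), dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

# Toy stand-ins for a graph encoder and predictor head (hypothetical sizes).
student = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 16))
predictor = torch.nn.Linear(16, 16)
teacher = copy.deepcopy(student)          # teacher starts as a copy, updated only via EMA
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(student.parameters()) + list(predictor.parameters()), lr=1e-3)

# view_a / view_b stand in for graph-level embeddings of two augmented views of the same graphs
view_a, view_b = torch.randn(8, 16), torch.randn(8, 16)
loss = distillation_loss(predictor(student(view_a)), teacher(view_b)) \
     + distillation_loss(predictor(student(view_b)), teacher(view_a))
loss.backward()
opt.step()
ema_update(teacher, student)
```

The symmetric form of the loss (each view predicts the teacher's output for the other) mirrors the "predict the teacher representation under different augmented views" description; the actual paper formulates this within a contrastive objective over graph pairs.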
Pages: 1161-1169
Page count: 9
Related Papers
50 records in total (items [31]-[40] shown)
  • [31] Variational Self-Distillation for Remote Sensing Scene Classification
    Hu, Yutao
    Huang, Xin
    Luo, Xiaoyan
    Han, Jungong
    Cao, Xianbin
    Zhang, Jun
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [32] Multi-exit self-distillation with appropriate teachers
    Sun, Wujie
    Chen, Defang
    Wang, Can
    Ye, Deshi
    Feng, Yan
    Chen, Chun
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2024, 25 (04) : 585 - 599
  • [33] Self-Distillation as Instance-Specific Label Smoothing
    Zhang, Zhilu
    Sabuncu, Mert R.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [34] SIMPLE SELF-DISTILLATION LEARNING FOR NOISY IMAGE CLASSIFICATION
    Sasaya, Tenta
    Watanabe, Takashi
    Ida, Takashi
    Ono, Toshiyuki
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 795 - 799
  • [35] Self-Distillation via Intra-Class Compactness
    Lin, Jiaye
    Li, Lin
    Yu, Baosheng
    Ou, Weihua
    Gou, Jianping
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT 1, 2025, 15031 : 139 - 151
  • [36] A Self-distillation Lightweight Image Classification Network Scheme
    Ni, S.
    Ma, X.
    Journal of Beijing University of Posts and Telecommunications (Beijing Youdian Daxue Xuebao), 2023, 46 (06): 66 - 71
  • [37] A dynamic dropout self-distillation method for object segmentation
    Chen, Lei
    Cao, Tieyong
    Zheng, Yunfei
    Wang, Yang
    Zhang, Bo
    Yang, Jibin
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (01)
  • [38] Generalization Self-distillation with Epoch-wise Regularization
    Xia, Yuelong
    Yang, Yun
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [39] Self-Distillation: Towards Efficient and Compact Neural Networks
    Zhang, Linfeng
    Bao, Chenglong
    Ma, Kaisheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (08) : 4388 - 4403
  • [40] Few-shot Learning with Online Self-Distillation
    Liu, Sihan
    Wang, Yue
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 1067 - 1070