Robust graph contrastive learning with multi-hop views for node classification

Times Cited: 0
Authors
Wang, Yutong [1 ]
Zhang, Junheng [1 ]
Cao, Ren [1 ]
Zou, Minhao [1 ]
Guan, Chun [1 ]
Leng, Siyang [1 ,2 ]
Affiliations
[1] Fudan Univ, Inst AI & Robot, Acad Engn & Technol, Shanghai 200433, Peoples R China
[2] Fudan Univ, Res Inst Intelligent Complex Syst, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised learning; Graph contrastive learning; Multi-hop; Node classification;
DOI
10.1016/j.asoc.2025.112783
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contrastive learning has proven successful in graph self-supervised learning by addressing label scarcity in real-world applications. However, most existing methods fail to incorporate multi-hop information into the augmented views and instead focus only on augmenting the raw input data, which incurs massive computational and memory costs. To address these limitations, we propose a novel approach, Multi-hop Views Graph Contrastive Learning (MHVGCL), that enhances node classification performance on graphs. Specifically, in contrast to existing methods that generate multiple augmented heads via neural networks, our approach generates these heads by exploiting multi-hop information, obtained iteratively from a single output head. This technique extracts more comprehensive structural information without destroying the graph topology. We further devise a multi-hop contrastive loss function that maximizes agreement among the multi-hop views of the same node while minimizing it among different nodes. This design yields more robust representation learning that keeps structural attributes invariant under different augmented views. Numerical experiments on a variety of benchmarks demonstrate the significant superiority of our approach over other advanced methods, learning more discriminative node representations even with extremely limited labels.
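The abstract's core idea, deriving multi-hop views by iteratively propagating a single output head over the graph and contrasting the same node across views, can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: the symmetric adjacency normalization, the function names, and the InfoNCE-style pairwise loss are choices made here for concreteness.

```python
import numpy as np

def normalize_adj(adj):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, a standard GNN choice (assumed here).
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_hop_views(h, adj_norm, num_hops):
    # Iteratively propagate one output head: view_k = A_norm @ view_{k-1}.
    # Multi-hop views come from repeated propagation, with no extra augmentation networks.
    views = [h]
    for _ in range(num_hops):
        views.append(adj_norm @ views[-1])
    return views

def info_nce(za, zb, tau=0.5):
    # Same node across two views is the positive pair; other nodes are negatives.
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / tau
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))                 # -log p(positive) averaged over nodes

def multi_hop_loss(views, tau=0.5):
    # Average the pairwise InfoNCE loss over all ordered pairs of distinct views.
    total, pairs = 0.0, 0
    for i in range(len(views)):
        for j in range(len(views)):
            if i != j:
                total += info_nce(views[i], views[j], tau)
                pairs += 1
    return total / pairs

# Toy usage: a 3-node path graph with random 4-dimensional embeddings.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
h = rng.normal(size=(3, 4))
views = multi_hop_views(h, normalize_adj(adj), num_hops=2)
loss = multi_hop_loss(views)
```

Because each view is just a further-propagated copy of the same head, the augmented views share the original topology by construction, which matches the abstract's claim of extracting structural information without destroying the graph.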
Pages: 12