Graph Barlow Twins: A self-supervised representation learning framework for graphs

Cited by: 63
Authors
Bielak, Piotr [1 ]
Kajdanowicz, Tomasz [1 ]
Chawla, Nitesh V. [2 ]
Affiliations
[1] Wroclaw Univ Sci & Technol, Dept Artificial Intelligence, Wroclaw, Poland
[2] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN USA
Keywords
Representation learning; Self-supervised learning; Graph embedding
DOI
10.1016/j.knosys.2022.109631
CLC classification number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The self-supervised learning (SSL) paradigm is an important area of research that aims to eliminate the need for expensive data labeling. Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define. This becomes even more challenging in the case of graphs and is a bottleneck for achieving robust representations. To overcome such limitations, we propose a framework for self-supervised graph representation learning, Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples. Moreover, it does not rely on non-symmetric neural network architectures, in contrast to BGRL, the state-of-the-art self-supervised graph representation learning method. We show that our method achieves results as competitive as the best self-supervised and fully supervised methods, while requiring fewer hyperparameters and substantially shorter computation time (ca. 30 times faster than BGRL). (c) The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
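The cross-correlation-based objective mentioned in the abstract follows the Barlow Twins idea: standardize the embeddings of two augmented views, compute their empirical cross-correlation matrix, pull the diagonal toward 1 (invariance) and the off-diagonal toward 0 (redundancy reduction). A minimal NumPy sketch follows; the function name `barlow_twins_loss`, the embedding shapes `(batch, dim)`, and the trade-off weight `lam` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=0.005):
    """Cross-correlation-based loss on two views' node embeddings.

    z1, z2 : arrays of shape (batch, dim), embeddings of the same nodes
             under two graph augmentations (assumed layout).
    lam    : weight of the redundancy-reduction term (hypothetical value).
    """
    n, _ = z1.shape
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(axis=0)) / z1.std(axis=0)
    z2 = (z2 - z2.mean(axis=0)) / z2.std(axis=0)
    # Empirical cross-correlation matrix between the two views (dim x dim).
    c = (z1.T @ z2) / n
    diag = np.diag(c)
    # Invariance term: each dimension should correlate with itself.
    on_diag = ((diag - 1.0) ** 2).sum()
    # Redundancy-reduction term: different dimensions should decorrelate,
    # removing the need for negative samples.
    off_diag = (c ** 2).sum() - (diag ** 2).sum()
    return on_diag + lam * off_diag
```

For identical views the invariance term vanishes (each standardized dimension has correlation 1 with itself), so the loss reduces to the small off-diagonal penalty; decorrelating views are penalized by the diagonal drifting away from 1.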
Pages: 12