Generalized Laplacian Eigenmaps

Cited by: 0
Authors
Zhu, Hao [1]
Koniusz, Piotr [1,2]
Affiliations
[1] CSIRO, Data61, Canberra, Australia
[2] Australian Natl Univ, Canberra, Australia
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022 | 2022
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
摘要
Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. COLES, a recent graph contrastive method, combines traditional graph embedding and negative sampling into one framework. COLES in fact minimizes the trace difference between the within-class scatter matrix encapsulating the graph connectivity and the total scatter matrix encapsulating negative sampling. In this paper, we propose a more essential framework for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), which learns a graph representation by maximizing the rank difference between the total scatter matrix and the within-class scatter matrix, resulting in the minimum class separation guarantee. However, optimizing the rank difference is an NP-hard problem. Thus, we replace the trace difference, which corresponds to a difference of nuclear norms, with a difference of LogDet expressions, which we argue is a more accurate surrogate for the NP-hard rank difference than the trace difference. While enjoying a lower computational cost, the difference of LogDet terms is lower-bounded by the Affine-invariant Riemannian metric (AIRM) and upper-bounded by AIRM scaled by a factor of √m. We show on popular benchmarks/backbones that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines.
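The LogDet surrogate and the AIRM bound from the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy graph, the scatter-matrix construction (within-class scatter from the graph Laplacian, total scatter from the raw embeddings), and the `eps` regularizer are assumptions made here for illustration only.

```python
import numpy as np

def logdet_diff(S_t, S_w, eps=1e-3):
    """Difference of LogDet terms, a smooth surrogate for the NP-hard
    rank difference rank(S_t) - rank(S_w); eps*I regularizes possibly
    singular scatter matrices so the log-determinants are finite."""
    I = eps * np.eye(S_t.shape[0])
    return np.linalg.slogdet(S_t + I)[1] - np.linalg.slogdet(S_w + I)[1]

def airm(A, B):
    """Affine-invariant Riemannian metric between SPD matrices:
    ||log(A^{-1/2} B A^{-1/2})||_F, via symmetric eigendecompositions."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    lam = np.linalg.eigvalsh(A_inv_sqrt @ B @ A_inv_sqrt)  # SPD, so eigvalsh is safe
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Toy setup (hypothetical): node embeddings Z and a random graph Laplacian L.
rng = np.random.default_rng(0)
Z = rng.standard_normal((50, 4))                 # 50 nodes, m = 4 dims
A = np.triu((rng.random((50, 50)) < 0.1).astype(float), 1)
A = A + A.T                                      # symmetric adjacency
L = np.diag(A.sum(1)) - A                        # combinatorial Laplacian
S_w = Z.T @ L @ Z                                # within-class scatter (connectivity)
S_t = Z.T @ Z                                    # total scatter (negative sampling)

eps = 1e-3
obj = logdet_diff(S_t, S_w, eps)                 # GLEN-style surrogate objective
d = airm(S_t + eps * np.eye(4), S_w + eps * np.eye(4))
# Upper bound from the abstract: |LogDet difference| <= sqrt(m) * AIRM.
```

The upper bound follows from Cauchy-Schwarz: the LogDet difference is the sum of the log-eigenvalues of the regularized `S_w^{-1} S_t`, while AIRM is the Euclidean norm of that same vector of log-eigenvalues, so the sum can exceed the norm by at most √m.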
Pages: 15