Embedding Graph Auto-Encoder for Graph Clustering

Cited by: 60
Authors
Zhang, Hongyuan [1 ,2 ]
Li, Pei [1 ,2 ]
Zhang, Rui [2 ]
Li, Xuelong [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[2] Northwestern Polytech Univ, Sch Artificial Intelligence Optic & Elect, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convolution; Neural networks; Decoding; Task analysis; Laplace equations; Spectral analysis; Training; Graph auto-encoder (GAE); graph clustering; inner-product distance; relaxed k-means; unsupervised representation learning;
DOI
10.1109/TNNLS.2022.3158654
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph clustering, which aims to partition the nodes of a graph into groups in an unsupervised way, has attracted considerable attention in recent years. To improve representative ability, several graph auto-encoder (GAE) models based on semi-supervised graph convolutional networks (GCNs) have been developed, achieving impressive results compared with traditional clustering methods. However, existing methods either fail to exploit the orthogonality of the representations generated by a GAE or separate the clustering step from the training of the neural network. We first prove that relaxed k-means obtains an optimal partition in the space equipped with the inner-product distance. Guided by this theoretical analysis of relaxed k-means, we design a GAE-based model for graph clustering that is consistent with the theory, namely the Embedding GAE (EGAE). The learned representations are well explainable, so they can also be used for other tasks. To induce the neural network to produce deep features appropriate for the specific clustering model, relaxed k-means and the GAE are learned simultaneously. Meanwhile, relaxed k-means can equivalently be regarded as a decoder that learns representations that can be linearly reconstructed from a set of centroid vectors. Accordingly, EGAE consists of one encoder and dual decoders. Extensive experiments demonstrate the superiority of EGAE and support the corresponding theoretical analyses.
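The relaxed k-means step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the standard spectral relaxation (dropping the discreteness constraint on the cluster indicator while keeping orthogonality), not the authors' implementation; the names `relaxed_kmeans` and `Z` are hypothetical, and the toy data is invented for illustration.

```python
import numpy as np

def relaxed_kmeans(Z, k):
    """Relaxed k-means on embeddings Z (n x d).

    Dropping the 0/1 constraint on the cluster indicator F and keeping
    only orthogonality (F^T F = I), the problem becomes
        max tr(F^T Z Z^T F)  s.t.  F^T F = I,
    whose optimum is spanned by the top-k eigenvectors of Z Z^T,
    i.e., the top-k left singular vectors of Z.
    """
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :k]  # relaxed indicator: one orthonormal row-space per node

# Toy embeddings: two well-separated groups, row-normalized so that
# clustering operates in the inner-product distance setting.
rng = np.random.default_rng(0)
Z = np.vstack([
    rng.normal([5.0, 0.0], 0.1, size=(10, 2)),
    rng.normal([0.0, 5.0], 0.1, size=(10, 2)),
])
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

F = relaxed_kmeans(Z, k=2)  # (20, 2) relaxed indicator matrix
```

In EGAE this relaxation is what allows the clustering objective to be optimized jointly with the GAE; a discrete partition can afterwards be recovered, e.g., by running ordinary k-means on the rows of `F`.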
Pages: 9352-9362
Page count: 11