GLASS: A Graph Laplacian Autoencoder with Subspace Clustering Regularization for Graph Clustering

Cited by: 0
Authors
Dengdi Sun
Liang Liu
Bin Luo
Zhuanlian Ding
Affiliations
[1] Anhui University,Key Laboratory of Intelligent Computing & Signal Processing (ICSP), Ministry of Education, School of Artificial Intelligence
[2] Hefei Comprehensive National Science Center,Institute of Artificial Intelligence
[3] Anhui University,Anhui Provincial Key Laboratory of Multimodal Cognitive Computing, School of Computer Science and Technology
[4] Anhui University,School of Internet
Source
Cognitive Computation | 2023, Vol. 15
Keywords
Graph clustering; Graph autoencoder; Graph neural networks; Graph representation learning
DOI: not available
Abstract
Graph clustering is an important unsupervised learning task in complex network analysis, and its latest progress relies mainly on graph autoencoder (GAE) models. However, these methods have three major drawbacks. (1) Most autoencoder models choose graph convolutional networks (GCNs) as their encoders, but the filters and weight matrices in GCN encoders are entangled, which degrades the resulting representations. (2) Real graphs are often sparse, requiring multi-layer propagation to generate effective features, but GCN encoders are prone to oversmoothing when multiple layers are stacked. (3) Existing methods ignore the distribution of the node features in the feature space during the embedding stage, making their results unsuitable for clustering tasks. To alleviate these problems, in this paper we propose a novel graph Laplacian autoencoder with subspace clustering regularization for graph clustering (GLASS). Specifically, we first use Laplacian smoothing filters instead of GCNs for feature propagation and multilayer perceptrons (MLPs) for nonlinear transformations, thereby resolving the entanglement between convolutional filters and weight matrices. Because multi-layer propagation is prone to oversmoothing, we further add residual connections between the Laplacian smoothing filters to enhance the multi-layer feature propagation capability of GLASS. In addition, to achieve improved clustering performance, we introduce a subspace clustering regularization term that constrains the autoencoder to learn node features that are more representative and better suited for clustering. Experiments on node clustering and image clustering using four widely used network datasets and three image datasets show that our method outperforms existing state-of-the-art methods. We also verify the effectiveness of the proposed method through link prediction, complexity analysis, parameter analysis, data visualization, and ablation studies.
The experimental results demonstrate the effectiveness of the proposed GLASS approach and show that it largely overcomes the shortcomings of GCN encoders. The method not only supports deeper graph encoding but also adaptively fits the subspace distribution of the given data, which we expect to inspire further research on graph neural networks and autoencoders.
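The two key ingredients the abstract describes — a decoupled Laplacian-smoothing encoder with residual connections, and a subspace-clustering (self-expressiveness) regularizer — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the filter form (I − k·L_sym), the coefficient k, the residual formulation, and the regularizer ‖Z − CZ‖²_F + λ‖C‖²_F are all common choices assumed here for clarity.

```python
import numpy as np

def laplacian_smoothing_encoder(A, X, num_layers=3, k=0.5):
    """Propagate features with a parameter-free Laplacian smoothing filter
    H = I - k * L_sym, adding a residual connection at each layer to
    counter oversmoothing. Filter form and k are illustrative assumptions."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    L_sym = np.eye(n) - d_inv_sqrt @ A_hat @ d_inv_sqrt   # symmetric normalized Laplacian
    H = np.eye(n) - k * L_sym                             # Laplacian smoothing filter
    Z = X.astype(float)
    for _ in range(num_layers):
        Z = H @ Z + Z                                     # filter step + residual connection
    return Z

def subspace_regularizer(Z, C, lam=1.0):
    """Self-expressiveness penalty ||Z - C Z||_F^2 + lam * ||C||_F^2,
    a standard subspace-clustering regularizer (assumed form): each node
    embedding is encouraged to be a linear combination of the others."""
    return np.linalg.norm(Z - C @ Z) ** 2 + lam * np.linalg.norm(C) ** 2
```

In a full model, the smoothed features would pass through an MLP (replacing the entangled GCN weight matrices), and the self-expression matrix C would be learned jointly with the autoencoder, with clustering read off from C (e.g., via spectral clustering on |C| + |C|ᵀ).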
Pages: 803–821 (18 pages)