Graph clustering is an important unsupervised learning task in complex network analysis, and its recent progress relies mainly on graph autoencoder (GAE) models. However, these methods have three major drawbacks. (1) Most autoencoder models choose graph convolutional networks (GCNs) as their encoders, but the filters and weight matrices in GCN encoders are entangled, which degrades the quality of the resulting representations. (2) Real graphs are often sparse, requiring multilayer propagation to generate effective features, but GCN encoders are prone to oversmoothing when many layers are stacked. (3) Existing methods ignore the distribution of node features in the feature space during the embedding stage, making their results poorly suited to clustering tasks. To alleviate these problems, in this paper we propose GLASS, a novel graph Laplacian autoencoder with subspace clustering regularization for graph clustering. Specifically, we first use Laplacian smoothing filters instead of GCNs for feature propagation and multilayer perceptrons (MLPs) for nonlinear transformation, thereby disentangling the convolutional filters from the weight matrices. Because multilayer propagation is prone to oversmoothing, we further add residual connections between the Laplacian smoothing filters to strengthen the multilayer feature propagation capability of GLASS. In addition, to achieve better clustering performance, we introduce a subspace clustering regularization term that constrains the autoencoder to produce node features that are more representative and better suited to clustering. Experiments on node clustering and image clustering using four widely used network datasets and three image datasets show that our method outperforms existing state-of-the-art methods. We further verify the effectiveness of the proposed method through link prediction, complexity analysis, parameter analysis, data visualization, and ablation studies. The experimental results demonstrate the effectiveness of the proposed GLASS approach and show that it largely overcomes the shortcomings of GCN encoders. The method not only supports deeper graph encoding but also adaptively fits the subspace distribution of the given data, which may inspire further research on neural networks and autoencoders.
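To make the decoupled design concrete, the sketch below illustrates the three ingredients named above: a parameter-free Laplacian smoothing filter with residual connections for propagation, an MLP for the nonlinear transformation, and a self-expressive subspace clustering regularizer. This is a minimal PyTorch sketch under our own assumptions, not the authors' released implementation; the names (laplacian_smoothing_filter, GLASSSketch, coef, the residual form, and the loss weights lam1/lam2) are illustrative, and the exact filter strength, residual scheme, and loss terms in GLASS may differ.

```python
# Minimal sketch of a Laplacian-smoothing autoencoder with a subspace
# clustering regularizer (hypothetical reconstruction, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def laplacian_smoothing_filter(adj: torch.Tensor, x: torch.Tensor,
                               steps: int = 4, k: float = 0.5,
                               alpha: float = 0.1) -> torch.Tensor:
    """Apply the parameter-free filter (I - k * L_sym) for several steps.

    A residual connection back to the raw features x is added after each
    step (one plausible form; the paper's exact residual scheme may differ)
    to mitigate oversmoothing in deep propagation.
    """
    n = adj.size(0)
    a_hat = adj + torch.eye(n)                         # add self-loops
    d_inv_sqrt = a_hat.sum(1).clamp(min=1e-12).pow(-0.5)
    # Symmetrically normalized Laplacian: I - D^{-1/2} A_hat D^{-1/2}
    l_sym = torch.eye(n) - d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    h = x
    for _ in range(steps):
        h = (1 - alpha) * (h - k * (l_sym @ h)) + alpha * x
    return h


class GLASSSketch(nn.Module):
    """MLP encoder over pre-smoothed features + self-expressive layer."""

    def __init__(self, in_dim: int, hid_dim: int, emb_dim: int, n_nodes: int):
        super().__init__()
        # Transformation is a plain MLP, decoupled from propagation.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, emb_dim))
        # Learnable self-expression coefficients C (diagonal forced to zero).
        self.coef = nn.Parameter(1e-4 * torch.randn(n_nodes, n_nodes))

    def forward(self, x_smoothed: torch.Tensor):
        z = self.encoder(x_smoothed)                   # embeddings Z
        c = self.coef - torch.diag(torch.diag(self.coef))
        z_recon = c @ z                                # Z ≈ C Z (self-expression)
        return z, z_recon, c


def loss_fn(z, z_recon, c, adj, lam1=1.0, lam2=1.0):
    """GAE reconstruction loss plus a subspace clustering regularizer."""
    a_pred = torch.sigmoid(z @ z.t())                  # inner-product decoder
    rec = F.binary_cross_entropy(a_pred, adj)
    subspace = lam1 * (z - z_recon).pow(2).mean() + lam2 * c.pow(2).mean()
    return rec + subspace
```

At inference time, clusters would typically be read off by spectral clustering on the affinity |C| + |C|^T, the standard recipe for self-expressive subspace models; smoothing is computed once up front, so only the MLP and C are trained.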