In recent years, Transformer modules have been widely adopted in visual tasks such as image classification, object detection, and segmentation. Unlike traditional convolutional neural networks (CNNs), the Transformer efficiently captures global dependencies in images through its self-attention mechanism, and with it has made significant progress across these tasks. Although many studies have shown that Transformer-based models fit data well, their large parameter counts can lead to overfitting and heavy computation. To address these issues, we introduce a truncated SVD technique that provides a low-rank approximation of the generated feature matrices. We derive truncation formulas applicable to Transformer feature matrices and design a new architecture, SVDFormer. In addition, we propose a new attention mechanism, SVD-Attention, a lightweight alternative implemented through sparsification that reduces complexity from quadratic to linear. Experiments show a significant improvement of SVDFormer over the standard Transformer: accuracies of 98.13%, 87.42%, and 89.69% are obtained in image classification on CIFAR-10, CIFAR-100, and ImageNet100, respectively.
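The core idea of approximating a feature matrix by a truncated SVD can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation; the matrix shape, the rank `k`, and the helper name `truncated_svd_approx` are assumptions for demonstration only.

```python
import numpy as np

def truncated_svd_approx(X, k):
    """Best rank-k approximation of X (Eckart-Young, Frobenius norm).

    Illustrative sketch: keeps only the k largest singular values and
    their singular vectors, as a stand-in for the low-rank feature
    compression described above.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy "feature matrix" (e.g. tokens x channels); shapes are arbitrary here.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 32))

X_8 = truncated_svd_approx(X, k=8)
# Reconstruction error shrinks monotonically as the retained rank grows.
err_8 = np.linalg.norm(X - X_8)
err_16 = np.linalg.norm(X - truncated_svd_approx(X, k=16))
print(err_16 < err_8)  # True
```

Storing the truncated factors costs O(k(m + n)) instead of O(mn) for an m-by-n feature matrix, which is the source of the parameter and compute savings when k is small.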