Few-shot learning (FSL) aims to improve the generalization ability of a learning model so that new categories can be classified from only a small number of available samples, which significantly reduces both annotation and model-training costs. Most existing metric-learning methods focus only on finding an appropriate metric space rather than on improving the discriminative power of the feature vectors, yet extracting the maximum information from the data is critical when samples are scarce. Based on the differing representation capacities of individual feature maps, a channel-based self-attention method is proposed to improve the discrimination among class samples by assigning larger weights to the more informative feature maps. In addition, the concept of a space prototype is introduced to exploit information that is readily obtainable from the samples. Furthermore, inspired by the auto-encoder, a method is proposed that leverages information from all the samples, in which the class prototypes are modified to improve their accuracy. As a parameter-free augmented feature extractor, the proposed self-attention method alleviates the over-fitting problem that is widespread in FSL, and it is general and compatible with many existing FSL methods, improving their performance. When the proposed methods are applied to prototypical networks, results improve on two few-shot learning benchmarks, miniImagenet and CUB, under three classification settings. In particular, when the training set differs substantially from the test set, the proposed method yields an absolute performance improvement of 10.23% and a relative improvement of 17.04%. © 2021, Editorial Board of Journal of Tianjin University (Science and Technology). All rights reserved.
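
As an illustration of the channel-based self-attention described above, the following is a minimal sketch assuming a parameter-free weighting rule in which each feature map receives a weight derived from its activation energy via a softmax over channels. The function name, the energy-based importance score, and the rescaling by the channel count are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a parameter-free channel-based self-attention: feature maps that
# carry more information receive larger weights. The specific weighting rule
# (softmax over per-channel activation energy) is an assumption for
# illustration only.
import torch
import torch.nn.functional as F

def channel_self_attention(features: torch.Tensor) -> torch.Tensor:
    """Re-weight feature maps by a data-dependent channel importance.

    features: (batch, channels, height, width) output of a backbone.
    Returns a tensor of the same shape with each channel scaled by its
    weight; no learnable parameters are introduced.
    """
    b, c, h, w = features.shape
    # Per-channel importance score: mean squared activation (energy).
    energy = features.pow(2).mean(dim=(2, 3))      # (b, c)
    # Normalize scores per sample, then rescale by the channel count so
    # the average channel weight stays at 1.
    weights = F.softmax(energy, dim=1) * c         # (b, c)
    return features * weights.view(b, c, 1, 1)

# Usage: plug in after any backbone, e.g. in a 5-way episode.
feats = torch.randn(5, 64, 5, 5)    # hypothetical Conv-4 embeddings
attended = channel_self_attention(feats)
print(attended.shape)               # torch.Size([5, 64, 5, 5])
```

Because the weighting is computed from the features themselves, the module adds no trainable parameters, which is consistent with the abstract's claim that the extractor helps alleviate over-fitting in the low-data regime.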