Anomaly detection methods based on generative models typically use the reconstruction loss of samples for anomaly discrimination. However, two problems arise in semi-supervised or unsupervised learning. First, the generator may generalize too well, which can reduce the reconstruction loss of some outliers. Second, background statistics can interfere with the reconstruction loss of outliers. Both problems reduce the effectiveness of anomaly detection. In this paper, we propose an anomaly detection method called MHMA (Multi-Headed Memory Autoencoder). A variational autoencoder is used as the generative model, and the latent vectors are constrained by a memory module, which increases the reconstruction error of abnormal samples. Moreover, MHMA uses a multi-head structure that splits the last layer of the decoder into multiple branches to learn and generate diverse sample distributions, keeping the generalization capability of the model within a reasonable range. When computing anomaly scores, a likelihood-ratio method is employed to obtain correct background statistics from a background model, thereby enhancing the sample-specific features in the reconstructed samples. The effectiveness and generality of MHMA are evaluated on different types of datasets: the model achieves 99.5% recall, 99.9% precision, 99.69% F1 and 98.12% MCC on the image dataset, and 98.61% recall, 98.73% precision, 98.67% F1 and 95.82% MCC on the network security dataset.
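The sketch below illustrates the overall idea summarized above: a VAE-style encoder, a memory module that re-expresses each latent code as a combination of learned prototype vectors, and a decoder whose last layer is split into several heads. All layer sizes, the attention-based memory addressing, and the head-wise scoring are illustrative assumptions for exposition, not the authors' exact implementation.

```python
# Minimal sketch of a multi-headed memory autoencoder (assumed architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MHMA(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32, mem_size=100, num_heads=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)       # VAE mean
        self.fc_logvar = nn.Linear(256, latent_dim)   # VAE log-variance
        # Memory module: learned prototypes of normal latent codes.
        self.memory = nn.Parameter(torch.randn(mem_size, latent_dim))
        self.decoder_body = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU())
        # Multi-head output: the last decoder layer is split into branches.
        self.heads = nn.ModuleList([nn.Linear(256, in_dim) for _ in range(num_heads)])

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        # Constrain z by addressing the memory: z_hat is a convex combination
        # of memory items, pulling anomalous codes toward normal patterns and
        # thus increasing their reconstruction error.
        attn = F.softmax(z @ self.memory.t(), dim=-1)   # (batch, mem_size)
        z_hat = attn @ self.memory                      # (batch, latent_dim)
        d = self.decoder_body(z_hat)
        # Each head reconstructs the input, yielding diverse reconstructions.
        recons = torch.stack([head(d) for head in self.heads], dim=1)
        return recons, mu, logvar

# Usage: a per-sample anomaly score based on reconstruction error over heads
# (the minimum over heads is one plausible choice).
model = MHMA()
x = torch.randn(8, 784)
recons, mu, logvar = model(x)
err = ((recons - x.unsqueeze(1)) ** 2).mean(dim=-1)    # (batch, num_heads)
score = err.min(dim=1).values                          # per-sample score
```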