Anomaly detection is an important task for identifying rare events such as fraud, intrusions, or medical diseases. However, it often has to be applied to personal or otherwise sensitive data, e.g. business data. This raises concerns about protecting the sensitive data, especially if it is to be analysed by third parties, e.g. in collaborative settings where data is collected by different entities but analysed jointly to benefit from more effective models. Besides approaches such as data anonymisation, one approach to privacy-preserving data mining is Federated Learning, especially in settings where data is collected at several distributed locations. A common, global model is obtained by aggregating models trained locally on each data source, while the training data remains at its source. Data privacy and machine learning can thus coexist in a decentralised system. While Federated Learning has been studied for several machine learning settings, such as classification, it remains rather unexplored for anomaly detection tasks. As anomalies are rare, they are not easily picked up by a detection method, and the parts of the model dedicated to recognising them might be lost during model aggregation. In this paper, we therefore study the anomaly detection task on two benchmark datasets, in supervised, semi-supervised, and unsupervised settings. We federate Multi-Layer Perceptrons, Gaussian Mixture Models, and Isolation Forests, and compare them to a centralised approach.
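To illustrate the aggregation step mentioned above, the following is a minimal sketch of FedAvg-style weighted parameter averaging, as it could apply to the federated Multi-Layer Perceptron case; the abstract does not specify the aggregation rule, so the weighting by client sample counts and the names `federated_average`, `client_weights`, and `num_samples` are illustrative assumptions, not the paper's method (tree-based models such as Isolation Forests would require a different aggregation scheme).

```python
# Illustrative sketch (assumption, not the paper's method): FedAvg-style
# aggregation of locally trained parameters, e.g. MLP layer weights.
import numpy as np

def federated_average(client_weights, num_samples):
    """Aggregate per-client parameter lists into one global model.

    client_weights: list over clients; each entry is a list of np.ndarrays
                    (one array per model layer), trained locally.
    num_samples:    number of training examples held by each client,
                    used to weight its contribution.
    """
    total = float(sum(num_samples))
    global_weights = []
    for layer_idx in range(len(client_weights[0])):
        # Weighted sum of this layer's parameters across all clients;
        # the raw training data never leaves the clients.
        layer = sum(
            (n / total) * w[layer_idx]
            for w, n in zip(client_weights, num_samples)
        )
        global_weights.append(layer)
    return global_weights

# Example: two clients, each holding a single 2x2 parameter "layer".
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
print(federated_average(clients, num_samples=[100, 300]))  # 0.25 everywhere
```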