In the context of process mining, alignments are increasingly adopted for conformance checking, thanks to their ability to provide sophisticated diagnostics on the nature and extent of deviations between observed traces and a reference process model. On the downside, computing alignments is computationally challenging, even more so when multiple perspectives of the process are taken into account, data in particular. In fact, every observed trace must in principle be compared with infinitely many model traces. In this work, we tackle this computational bottleneck by borrowing the classical idea of encoding from machine learning. Instead of computing alignments directly and exactly, we do so approximately, after applying a lossy trace encoding that maps each trace into a compact vectorial representation retaining only selected information from the original trace. We study trace encoding-based approximate alignments for processes equipped with event data attributes from three different angles. First, we show that computing approximate alignments in this way is much more efficient than in the exact setting. Second, we evaluate how accurate such approximate alignments are, considering different encoding strategies that focus on different features of the trace. Our findings suggest that sufficiently rich encodings indeed yield good accuracy. Third, we consider the impact of the frequency and density of model variants, comparing the effectiveness of standard approximate multi-perspective alignments with that of a variant that incorporates probabilities. As a by-product of this analysis, we also gain insights into how these two approaches perform in the presence of noise.
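To give a concrete flavour of what a lossy trace encoding may look like, the following is a minimal, purely illustrative Python sketch, not one of the encoding strategies evaluated in this work: it maps a trace, represented as a list of events carrying an activity label and data attributes, into a fixed-length vector combining a bag-of-activities count with simple aggregates of a numeric attribute. The activity alphabet, the attribute name `amount`, and the helper `encode_trace` are hypothetical names introduced here for illustration only.

```python
# Illustrative sketch of a lossy trace encoding (hypothetical; not the
# encodings studied in this work). A trace is a list of events, each with
# an activity label and optional data attributes.

from typing import Dict, List


def encode_trace(trace: List[Dict], activities: List[str],
                 numeric_attr: str = "amount") -> List[float]:
    """Map a trace to a fixed-length vector: per-activity counts followed by
    min/max/mean of a numeric attribute. Ordering information is discarded,
    which is what makes the encoding lossy."""
    counts = [sum(1 for e in trace if e["activity"] == a) for a in activities]
    values = [e[numeric_attr] for e in trace if numeric_attr in e]
    if values:
        aggregates = [min(values), max(values), sum(values) / len(values)]
    else:
        aggregates = [0.0, 0.0, 0.0]
    return [float(c) for c in counts] + aggregates


# Example usage with a hypothetical loan-handling trace.
alphabet = ["register", "check", "decide"]
trace = [
    {"activity": "register", "amount": 1000.0},
    {"activity": "check"},
    {"activity": "check"},
    {"activity": "decide", "amount": 1000.0},
]
print(encode_trace(trace, alphabet))  # [1.0, 2.0, 1.0, 1000.0, 1000.0, 1000.0]
```

Vectors of this kind can then be compared directly (e.g., via a distance in the encoded space) instead of exhaustively aligning the observed trace against infinitely many model traces, which is the source of the efficiency gain discussed above.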