Systems based on i-vectors represent the current state of the art in text-independent speaker recognition. In this work we introduce a new compact representation of a speech segment, similar to the speaker factors of Joint Factor Analysis (JFA) and to i-vectors, that we call the "e-vector". The e-vectors derive their name from the eigen-voice space of the JFA speaker modeling approach. Our working hypothesis is that JFA estimates a more informative speaker subspace than the "total variability" i-vector subspace, because the latter is obtained by considering each training segment as belonging to a different speaker. We thus propose a simple "i-vector style" modeling and training technique that exploits this observation and estimates a more accurate subspace than the one provided by the classical i-vector approach, as confirmed by the results of a set of tests performed on the extended core NIST 2012 Speaker Recognition Evaluation dataset. Simply replacing the i-vectors with e-vectors yields approximately a 10% average improvement of the C-primary cost function across different systems and classifiers. These performance gains come without any additional memory or computational costs with respect to standard i-vector systems.
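The core difference stated above, namely how training segments are grouped before subspace estimation, can be made concrete with a minimal sketch. This is an assumed illustration, not the paper's implementation; the function names and data are hypothetical:

```python
# Hedged sketch: contrast the segment labeling used for "total
# variability" (i-vector) subspace training with the speaker-grouped
# labeling of eigen-voice (e-vector style) training.

def ivector_style_labels(segments):
    # i-vector training: every segment is treated as a distinct "speaker"
    return list(range(len(segments)))

def evector_style_labels(segments, speaker_of):
    # e-vector / JFA-style training: segments of the same speaker
    # share one label, so within-speaker variability is not absorbed
    # into the estimated speaker subspace
    speakers = {}
    labels = []
    for seg in segments:
        spk = speaker_of[seg]  # hypothetical segment-to-speaker map
        labels.append(speakers.setdefault(spk, len(speakers)))
    return labels

segments = ["seg1", "seg2", "seg3"]
speaker_of = {"seg1": "alice", "seg2": "alice", "seg3": "bob"}
print(ivector_style_labels(segments))              # [0, 1, 2]
print(evector_style_labels(segments, speaker_of))  # [0, 0, 1]
```

Under this view, the downstream subspace estimation machinery is unchanged; only the grouping of segments differs, which is consistent with the abstract's claim that e-vectors add no memory or computational cost.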