We prove the following generalization of the Entropy Power Inequality: $h(A\underline{x}) \geq h(A\underline{\tilde{x}})$, where $h(\cdot)$ denotes (joint) differential entropy, $\underline{x} = x_1 \ldots x_n$ is a random vector with independent components, $\underline{\tilde{x}} = \tilde{x}_1 \ldots \tilde{x}_n$ is a Gaussian vector with independent components such that $h(\tilde{x}_i) = h(x_i)$, $i = 1 \ldots n$, and $A$ is any matrix. This generalization of the entropy power inequality is applied to show that a non-Gaussian vector with independent components becomes "closer" to Gaussianity after a linear transformation, where the distance to Gaussianity is measured by the information divergence. Another application is a lower bound, greater than zero, for the mutual information between nonoverlapping spectral components of a non-Gaussian white process. Finally, we describe a dual generalization of the Fisher Information Inequality.
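As a brief sketch (not part of the original abstract), the inequality can be written in display form; taking $A$ to be the single row $(1\ 1)$ acting on two independent components, which is an assumption introduced here only for illustration, recovers an equivalent form of the classical entropy power inequality:

% Generalized EPI from the abstract, with Gaussian surrogates matched in entropy:
\[
  h(A\underline{x}) \;\ge\; h(A\underline{\tilde{x}}),
  \qquad \tilde{x}_i \sim \mathcal{N}(0,\sigma_i^2),
  \quad h(\tilde{x}_i) = h(x_i), \quad i = 1,\dots,n.
\]
% Special case A = (1 1): since \tilde{x}_1 + \tilde{x}_2 is Gaussian with
% variance \sigma_1^2 + \sigma_2^2 and e^{2h(\tilde{x}_i)} = 2\pi e\,\sigma_i^2,
\[
  h(x_1 + x_2) \;\ge\; h(\tilde{x}_1 + \tilde{x}_2)
  \;=\; \tfrac{1}{2}\log\!\bigl(e^{2h(x_1)} + e^{2h(x_2)}\bigr),
\]
% which is the classical entropy power inequality
% e^{2h(x_1+x_2)} \ge e^{2h(x_1)} + e^{2h(x_2)}.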