We design accelerated algorithms with improved rates for several fundamental classes of optimization problems. Our algorithms all build upon techniques related to the analysis of primal-dual extragradient methods via relative Lipschitzness, proposed recently by Cohen et al. (2021).

(1) Separable minimax optimization. We study separable minimax optimization problems of the form $\min_x \max_y f(x) - g(y) + h(x, y)$, where $f$ and $g$ have smoothness and strong convexity parameters $(L_x, \mu_x)$ and $(L_y, \mu_y)$, respectively, and $h$ is convex-concave with a $(\Lambda_{xx}, \Lambda_{xy}, \Lambda_{yy})$-blockwise operator norm bounded Hessian. We provide an algorithm using $\widetilde{O}\big(\sqrt{L_x/\mu_x} + \sqrt{L_y/\mu_y} + \Lambda_{xx}/\mu_x + \Lambda_{xy}/\sqrt{\mu_x \mu_y} + \Lambda_{yy}/\mu_y\big)$ gradient queries. Notably, for convex-concave minimax problems with bilinear coupling (e.g. quadratics), where $\Lambda_{xx} = \Lambda_{yy} = 0$, our rate matches a lower bound of Zhang et al. (2019).

(2) Finite sum optimization. We study finite sum optimization problems of the form $\min_x \frac{1}{n} \sum_{i \in [n]} f_i(x)$, where each $f_i$ is $L_i$-smooth and the overall problem is $\mu$-strongly convex. We provide an algorithm using $\widetilde{O}\big(n + \sum_{i \in [n]} \sqrt{L_i/(n\mu)}\big)$ gradient queries. Notably, when the smoothness bounds $\{L_i\}_{i \in [n]}$ are non-uniform, our rate improves upon accelerated SVRG (Lin et al., 2015; Frostig et al., 2015) and Katyusha (Allen-Zhu, 2017) by up to a $\sqrt{n}$ factor.

(3) Minimax finite sums. We generalize our algorithms for minimax and finite sum optimization to solve a natural family of minimax finite sum optimization problems at an accelerated rate, encapsulating both of the above results up to logarithmic and $\sqrt{n}$ factors.
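As a rough illustration of the non-uniform smoothness gain in (2) (this comparison is a sketch, assuming the commonly stated Katyusha-style baseline $\widetilde{O}\big(n + \sqrt{n\bar{L}/\mu}\big)$ with average smoothness $\bar{L} := \frac{1}{n}\sum_{i \in [n]} L_i$, and an extreme example chosen here for exposition rather than taken from the results above): by Cauchy–Schwarz,
\[
\sum_{i \in [n]} \sqrt{\frac{L_i}{n\mu}} \;\le\; \sqrt{\frac{\sum_{i \in [n]} L_i}{\mu}} \;=\; \sqrt{\frac{n\bar{L}}{\mu}},
\]
with equality only when all $L_i$ coincide. In the skewed case $L_1 = n\bar{L}$ and $L_i \approx 0$ for $i \ge 2$, the left-hand side is roughly $\sqrt{\bar{L}/\mu}$, a $\sqrt{n}$-factor improvement over the $\sqrt{n\bar{L}/\mu}$ term in the baseline.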