We propose a latent variable model to discover faithful low-dimensional representations of high-dimensional data. The model computes a low-dimensional embedding that aims to preserve neighborhood relationships encoded by a sparse graph. The model both leverages and extends current leading approaches to this problem. Like t-distributed Stochastic Neighbor Embedding, the model can produce two- and three-dimensional embeddings for visualization, but it can also learn higher-dimensional embeddings for other uses. Like LargeVis and Uniform Manifold Approximation and Projection, the model produces embeddings by balancing two goals: pulling nearby examples closer together and pushing distant examples further apart. Unlike these approaches, however, the latent variables in our model provide additional structure that can be exploited for learning. We derive an Expectation-Maximization procedure with closed-form updates that monotonically improve the model's likelihood. In this procedure, embeddings are iteratively adapted by solving sparse, diagonally dominant systems of linear equations that arise from a discrete graph Laplacian. For large problems, we also develop an approximate coarse-graining procedure that avoids the need for negative sampling of nonadjacent nodes in the graph. We demonstrate the model's effectiveness on datasets of images and text.
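The abstract leaves the exact update equations to the paper, but the minimal Python sketch below illustrates the kind of linear-algebraic step it describes: solving a sparse, diagonally dominant system built from a discrete graph Laplacian. The weight matrix `W`, shift `lam`, and right-hand side `B` here are invented placeholders, not the model's actual EM quantities.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import splu

# Illustrative only: W, lam, and B are placeholders standing in for the
# quantities that the paper's closed-form EM updates would supply.
rng = np.random.default_rng(0)
n, d = 200, 2  # number of graph nodes, embedding dimension

# Random sparse symmetric weight matrix, standing in for the sparse
# neighborhood graph (e.g., a k-nearest-neighbor graph).
W = sp.random(n, n, density=0.05, random_state=0)
W = W + W.T

L = laplacian(W)                        # discrete graph Laplacian L = D - W
lam = 1e-2                              # small positive shift; makes the
A = (L + lam * sp.identity(n)).tocsc()  # system strictly diagonally dominant

B = rng.standard_normal((n, d))         # placeholder right-hand side

Y = splu(A).solve(B)                    # sparse direct solve for the
print(Y.shape)                          # n x d embedding update -> (200, 2)
```

Because such systems are sparse and diagonally dominant, they can also be solved with iterative methods (e.g., conjugate gradients with a simple preconditioner) when a direct factorization is too costly.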