In this work, we consider the problem of steering the first two moments of the uncertain state of an unknown discrete-time stochastic nonlinear system to a given terminal distribution in finite time. Toward that goal, first, a non-parametric predictive model is learned from a set of available training data points using stochastic variational Gaussian process regression, a powerful and highly scalable machine learning tool for inferring distributions over arbitrary nonlinear functions. Second, we formulate a tractable nonlinear covariance steering algorithm that utilizes the learned Gaussian process predictive model to compute a feedback policy that drives the state distribution of the system close to the goal distribution. In a greedy approach, we linearize the Gaussian process model at each time step around the latest predicted mean and covariance, solve the linear covariance steering problem, and propagate the state statistics to the next time step using the unscented transform. This process is then repeated in a shrinking-horizon model predictive control fashion. The cautious behavior induced by the Gaussian process predictive model, which captures both process noise and modeling errors, is demonstrated in numerical simulations.
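To make the statistics-propagation step above concrete, the following minimal Python sketch implements a generic unscented transform and pushes a Gaussian state (mean and covariance) through a toy one-step nonlinear map. The `dynamics` function, the parameter defaults, and all names here are illustrative assumptions, not the paper's implementation; in the proposed pipeline the propagated map would be the learned stochastic variational Gaussian process predictive model in closed loop with the computed feedback policy, which is not reproduced here.

```python
import numpy as np


def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f
    using the standard unscented transform."""
    n = mean.shape[0]
    lam = alpha ** 2 * (n + kappa) - n

    # Sigma points: the mean plus/minus the columns of the scaled covariance square root.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma_pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

    # Standard weights for recombining the mean and covariance estimates.
    w_m = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

    # Push each sigma point through the nonlinear map.
    y = np.array([f(x) for x in sigma_pts])

    # Recombine into the propagated mean and covariance.
    mean_y = w_m @ y
    diff = y - mean_y
    cov_y = (w_c[:, None] * diff).T @ diff
    return mean_y, cov_y


if __name__ == "__main__":
    # Hypothetical one-step nonlinear dynamics standing in for the learned
    # Gaussian process predictive model (assumption for illustration only).
    def dynamics(x):
        return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * np.sin(x[0])])

    mu0 = np.array([0.5, -0.2])
    sigma0 = 0.05 * np.eye(2)
    mu1, sigma1 = unscented_transform(dynamics, mu0, sigma0)
    print("propagated mean:", mu1)
    print("propagated covariance:\n", sigma1)
```

In the shrinking-horizon loop described above, a call of this kind would advance the predicted state statistics by one time step after each linear covariance steering subproblem is solved around the latest predicted mean and covariance.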