Consider a compact metric space S and a pair (j, k) with k >= 2 and 1 <= j <= k. For any probability distribution theta in P(S), define a Markov chain on S as follows: from state s, take k i.i.d. (theta) samples and jump to the one that is j-th closest to s. Such a chain converges in distribution to a unique stationary distribution, say pi(j,k)(theta). Thus this defines a mapping pi(j,k): P(S) -> P(S). What happens when we iterate this mapping? In particular, what are its fixed points? We present a few rigorous results, to complement our extensive simulation study elsewhere.
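As a concrete illustration of the chain and of iterating the map theta -> pi(j,k)(theta), the following is a minimal simulation sketch on S = [0, 1] with the usual distance. The function names (jk_chain_step, estimate_pi_jk, iterate_map), the burn-in/Monte Carlo estimation, and the resampling scheme used to iterate the map are illustrative choices made here, not taken from the paper.

import numpy as np

def jk_chain_step(s, sampler, j, k, rng):
    """One step: draw k i.i.d. samples from theta (via `sampler`) and
    move to the sample that is j-th closest to the current state s."""
    samples = sampler(k, rng)
    order = np.argsort(np.abs(samples - s))   # distances on [0, 1]
    return samples[order[j - 1]]

def estimate_pi_jk(sampler, j, k, n_steps=100_000, burn_in=10_000, seed=0):
    """Run the chain and return post-burn-in states as a Monte Carlo
    approximation of the stationary distribution pi(j,k)(theta)."""
    rng = np.random.default_rng(seed)
    s = sampler(1, rng)[0]            # arbitrary starting state
    states = np.empty(n_steps)
    for t in range(n_steps):
        s = jk_chain_step(s, sampler, j, k, rng)
        states[t] = s
    return states[burn_in:]

def iterate_map(j, k, n_rounds=5, **kwargs):
    """Iterate theta -> pi(j,k)(theta): start from Uniform[0, 1], then at
    each round resample from the previous empirical distribution."""
    samples = None
    for _ in range(n_rounds):
        if samples is None:
            sampler = lambda m, rng: rng.uniform(0.0, 1.0, size=m)
        else:
            prev = samples
            sampler = lambda m, rng, prev=prev: rng.choice(prev, size=m)
        samples = estimate_pi_jk(sampler, j, k, **kwargs)
        print(f"mean={samples.mean():.4f}  std={samples.std():.4f}")
    return samples

if __name__ == "__main__":
    iterate_map(j=1, k=3)   # e.g. "jump to the closest of 3 samples"

In this sketch the output of each round is only an empirical approximation of pi(j,k)(theta), so repeated rounds give at best a heuristic picture of the iterated map and its candidate fixed points, in the spirit of the simulation study mentioned above.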