A technique introduced by Indyk and Woodruff (STOC 2005) has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called Precision Sampling. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x = (x_1, x_2, \ldots, x_n)$, which is useful for the following applications:

- Estimating the $F_k$-moment of $x$, for $k > 2$.
- Estimating the $\ell_p$-norm of $x$, for $p \in [1, 2]$, with small update time.
- Estimating cascaded norms $\ell_p(\ell_q)$ for all $p, q > 0$.
- $\ell_1$-sampling, where the goal is to produce an element $i$ with probability (approximately) $|x_i| / \|x\|_1$. This extends to similarly defined $\ell_p$-sampling, for $p \in [1, 2]$.

For all these applications the algorithm is essentially the same: scale the vector $x$ entry-wise by a well-chosen random vector, and run a heavy-hitter estimation algorithm on the resulting vector. Our sketch is a linear function of $x$, thereby allowing general updates to the vector $x$.

Precision Sampling itself addresses the problem of estimating a sum $\sum_{i=1}^n a_i$ from weak estimates of each real $a_i \in [0, 1]$. More precisely, the estimator first chooses a desired precision $u_i \in (0, 1]$ for each $i \in [n]$, and then it receives an estimate of every $a_i$ within additive error $u_i$. Its goal is to provide a good approximation to $\sum_i a_i$ while keeping tabs on the "approximation cost" $\sum_i 1/u_i$. Here we refine previous work (Andoni, Krauthgamer, and Onak, FOCS 2010), which shows that as long as $\sum_i a_i = \Omega(1)$, a good multiplicative approximation can be achieved using total precision of only $O(n \log n)$.
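As a concrete illustration of the "scale entry-wise by a random vector, then find the heavy hitter" recipe, the following is a minimal offline sketch of an $\ell_1$-sampler. It is not the paper's exact construction: it scales by inverse exponential random variables (a standard variant of the trick), and it simply takes the maximum of the scaled vector, whereas a streaming implementation would recover that heaviest coordinate from a heavy-hitter sketch such as CountSketch.

```python
import random

def l1_sample(x):
    """Toy (offline) l1-sampler: returns index i with probability |x_i| / ||x||_1.

    Scale x entry-wise by a random vector: z_i = |x_i| / e_i with e_i ~ Exp(1).
    Since e_i / |x_i| is exponential with rate |x_i|, the argmax of z equals i
    with probability exactly |x_i| / ||x||_1 (minimum of independent
    exponentials). A streaming algorithm would feed the scaled vector z into a
    heavy-hitter sketch instead of storing it explicitly.
    """
    z = [abs(xi) / random.expovariate(1.0) if xi != 0 else 0.0 for xi in x]
    return max(range(len(x)), key=lambda i: z[i])
```

Calling `l1_sample` repeatedly on a fixed vector and tallying the returned indices reproduces the distribution $|x_i| / \|x\|_1$ empirically.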
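The core of Precision Sampling can likewise be sketched in a few lines. The toy estimator below (an illustration, not the refined estimator of the paper) chooses each precision $u_i$ uniformly from $(0, 1]$ and uses the identity $\Pr[a_i / u_i > t] = a_i / t$ for $t \geq 1$, so that $t$ times the number of indices whose noisy scaled estimate crosses the threshold $t$ is an unbiased estimate of $\sum_i a_i$ up to the additive noise. The hypothetical callback `noisy_estimate(i, u)` stands in for whatever process supplies an estimate of $a_i$ within additive error $u$; the paper's cost accounting for $\sum_i 1/u_i$ (which truncates the smallest precisions to achieve total precision $O(n \log n)$) is omitted here.

```python
import random

def precision_sample_estimate(noisy_estimate, n, threshold=4.0, trials=200):
    """Toy precision-sampling estimator for S = sum_i a_i, with a_i in [0, 1].

    noisy_estimate(i, u) must return a value within additive u of a_i.
    For u ~ Uniform(0, 1] and t >= 1 >= a_i, Pr[a_i / u > t] = a_i / t, so
    t * E[#{i : a_i / u_i > t}] = sum_i a_i. Since |a_hat - a_i| <= u_i,
    a_hat / u_i is within 1 of a_i / u_i, which perturbs the threshold by at
    most 1 and costs only a constant-factor distortion for threshold > 1.
    """
    total = 0.0
    for _ in range(trials):
        count = 0
        for i in range(n):
            u = random.uniform(1e-9, 1.0)   # precision requested for a_i
            a_hat = noisy_estimate(i, u)    # estimate within +/- u of a_i
            if a_hat / u > threshold:
                count += 1
        total += threshold * count
    return total / trials                   # average over trials to cut variance
```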