Estimation of a sparse signal representation, one with the minimum number of nonzero components, is hard. In this paper, we show that for a nontrivial set of the input data the corresponding optimization problem is equivalent to, and can be solved by, an algorithm devised for a simpler optimization problem. The simpler optimization problem corresponds to estimation of signals under a low-spread constraint. The goal of the two optimization problems is to minimize the Euclidean norm of the linear approximation error with an ℓp penalty on the coefficients, for p = 0 (sparse) and p = 1 (low-spread), respectively. The ℓ0 problem is hard, whereas the ℓ1 problem can be solved efficiently by an iterative algorithm. Here we precisely define the ℓ0 optimization problem, construct an associated ℓ1 optimization problem, and show that for a set of input data with open interior the optimizers of the two optimization problems have the same support. The associated ℓ1 optimization problem is used to find the support of the ℓ0 optimizer. Once the support of the ℓ0 problem is known, the actual solution is easily found by solving a linear system of equations. However, we point out that our approach does not solve the harder optimization problem for all input data and thus may fail to produce the optimal solution in some cases. (c) 2005 Wiley Periodicals, Inc.
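The two-step procedure described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's algorithm: it assumes the ℓ1 problem is solved by iterative soft-thresholding (ISTA, one common iterative solver for ℓ1-penalized least squares), after which the ℓ0-optimal coefficients are recovered by a least-squares refit restricted to the support found in the ℓ1 step. The matrix `A`, signal `b`, and penalty `lam` below are hypothetical example data.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

def refit_on_support(A, b, x_l1, tol=1e-6):
    """Least-squares refit restricted to the support found by the l1 step."""
    support = np.flatnonzero(np.abs(x_l1) > tol)
    x = np.zeros(A.shape[1])
    if support.size:
        x[support], *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    return x, support

# Hypothetical example: a signal with 2 nonzero coefficients out of 10.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true                      # noiseless observations

x_l1 = ista(A, b, lam=0.1)          # step 1: solve the l1 problem
x_hat, support = refit_on_support(A, b, x_l1)  # step 2: linear system on the support
```

The refit step removes the shrinkage bias of the ℓ1 estimate; as the abstract notes, this recovers the ℓ0 optimizer only when the two problems share the same support, which is not guaranteed for all input data.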