Sparse recovery seeks to estimate the support and the non-zero entries of a sparse signal $x_0 \in \mathbb{R}^n$ from possibly incomplete noisy observations $y = A x_0 + \varepsilon$, with $A \in \mathbb{R}^{m \times n}$, $m \leq n$. It has been shown that, under various restrictive conditions on the matrix $A$, the problem can be reduced to the $\ell_1$-regularized problem $\min \|x\|_1$ subject to $\|Ax - y\|_2 < \delta$, where $\delta$ is the size of the error $\varepsilon$, and the approximation error is well controlled by $\delta$. A popular method for solving this minimization problem is the iteratively reweighted least squares (IRLS) algorithm. Here we reformulate sparse recovery as an inverse problem in the Bayesian framework, express the sparsity belief by means of a hierarchical prior model, and show that the maximum a posteriori (MAP) solution computed by a recently proposed iterative alternating sequential (IAS) algorithm, requiring only the solution of linear systems in the least squares sense, converges linearly to the unique minimizer for any matrix $A$, and quadratically on the complement of the support of the minimizer. The values of the parameters of the hierarchical model are assigned from an estimate of the signal-to-noise ratio and an a priori belief about the degree of sparsity of the underlying signal, and automatically take into account the sensitivity of the data to the different components of $x$. The approach gives a solid Bayesian interpretation for the commonly used sensitivity weighting in geophysics and biomedical applications. Moreover, since for a suitable choice of the sequences of hyperprior parameters the IAS solution converges to the $\ell_1$-regularized solution, the Bayesian framework for inverse problems makes the $\ell_1$-magic happen in the $\ell_2$ framework.
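The abstract does not spell out the IAS iteration itself; the following is a minimal sketch of what such an alternating scheme might look like under common assumptions from the hierarchical-prior literature (conditionally Gaussian prior with variances $\theta_j$ and a gamma hyperprior). The function name `ias`, the hyperparameter names `theta_star` and `eta`, and all numerical values are illustrative, not taken from the source; only the structure, alternating a plain least-squares solve with a closed-form variance update, reflects the description above.

```python
import numpy as np

def ias(A, y, theta_star, eta, sigma=1.0, n_iter=50):
    """Sketch of an iterative alternating sequential (IAS) iteration.

    Alternates between
      (1) an x-step: one standard least-squares solve on a stacked,
          prior-whitened system, and
      (2) a theta-step: a closed-form refresh of the prior variances
          (here for a gamma hyperprior; names are illustrative).
    """
    m, n = A.shape
    theta = theta_star.copy()
    x = np.zeros(n)
    for _ in range(n_iter):
        # x-step: minimize ||Ax - y||^2 / sigma^2 + sum_j x_j^2 / theta_j.
        # Substituting x = D w with D = diag(sqrt(theta)) turns this into
        # an ordinary least-squares problem for w on a stacked system.
        D = np.sqrt(theta)
        A_scaled = (A * D) / sigma          # column j scaled by sqrt(theta_j)
        stacked = np.vstack([A_scaled, np.eye(n)])
        rhs = np.concatenate([y / sigma, np.zeros(n)])
        w, *_ = np.linalg.lstsq(stacked, rhs, rcond=None)
        x = D * w
        # theta-step: closed-form minimizer of the objective in theta_j,
        # obtained by setting its derivative to zero (gamma hyperprior case).
        theta = theta_star * (eta / 2
                              + np.sqrt(eta**2 / 4 + x**2 / (2 * theta_star)))
    return x, theta
```

Note that the expensive part of each sweep is a single least-squares solve, consistent with the abstract's remark that the algorithm requires only the solution of linear systems in the least squares sense; small values of the hyperparameters drive most variances, and hence most components of $x$, toward zero.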