In solving nonlinear inverse-scattering problems, an iterative approach is usually required, particularly if the nonlinearity is strong. A typical approach is to use a descent algorithm to minimize a global error norm, such as the mean-square error, to arrive at the set of parameters that best predicts the data under an assumed model of the unknown scattering object. In minimizing the error norm, however, the algorithm may converge to a local minimum and thus return a false solution. A different strategy is proposed that is not based on gradient descent and that, at least in principle, avoids local minima and converges to the true solution (the global minimum). A simple inverse-scattering problem is used as an illustration. © 2005 Acoustical Society of America.
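The abstract describes the conventional strategy it argues against: gradient descent on a mean-square misfit, which can stall in a local minimum. The following minimal sketch, not taken from the paper, illustrates that failure mode with a hypothetical one-parameter nonlinear forward model (the parameter `a`, the wavenumbers `k`, and the sinusoidal model are all illustrative assumptions, not the authors' formulation).

```python
import numpy as np

# Hypothetical nonlinear "forward model": predicted field at wavenumbers k
# for a single unknown parameter a (illustrative stand-in for a scatterer property).
k = np.linspace(1.0, 5.0, 40)
a_true = 2.0
data = np.sin(a_true * k)          # synthetic noise-free measurements

def misfit(a):
    """Mean-square error between the model prediction and the data."""
    return np.mean((np.sin(a * k) - data) ** 2)

def misfit_grad(a):
    """Analytic gradient of the mean-square misfit with respect to a."""
    r = np.sin(a * k) - data
    return np.mean(2.0 * r * k * np.cos(a * k))

def gradient_descent(a0, step=0.01, iters=2000):
    """Plain fixed-step gradient descent on the misfit."""
    a = a0
    for _ in range(iters):
        a -= step * misfit_grad(a)
    return a

# A good starting guess converges to the global minimum (a_true = 2.0);
# a poor one can be trapped in a local minimum of the multimodal error
# surface, i.e. the "false solution" the abstract warns about.
for a0 in (1.8, 3.5):
    a_est = gradient_descent(a0)
    print(f"start {a0:.1f} -> estimate {a_est:.3f}, misfit {misfit(a_est):.3e}")
```

The proposed alternative strategy in the paper is precisely a way to avoid this dependence on the starting guess; the sketch only demonstrates why descent on the error norm alone is not sufficient when the misfit surface is multimodal.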