A concern with iterative optimization techniques is ensuring that the algorithm reaches the global optimum rather than becoming stranded at a local optimum. One method used to help ensure global convergence is the injection of extra noise terms into the recursion, which may allow the algorithm to escape local optima. The amplitude of the injected noise is decreased over time (a process called "annealing"), so that the algorithm can ultimately converge once it reaches the vicinity of the global optimum. In this context, we examine the performance of a certain "gradient-free" stochastic approximation algorithm. We argue that, in some cases, the naturally occurring error in the gradient approximation effectively acts as injected noise that promotes convergence of the algorithm to a global optimum. The discussion is supported by a numerical study.
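To make the annealing idea concrete, the following is a minimal sketch of a gradient-free stochastic approximation recursion with decaying injected noise. It is not the specific algorithm analyzed here: the two-sided finite-difference gradient approximation, the gain sequences `a_k`, `c_k`, `b_k`, their decay rates, and the multimodal test loss are all illustrative assumptions chosen only to show how an annealing-type recursion is typically structured.

```python
import numpy as np

def fd_gradient(loss, theta, c):
    """Two-sided finite-difference gradient estimate (no analytic gradient needed)."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = c
        g[i] = (loss(theta + e) - loss(theta - e)) / (2.0 * c)
    return g

def annealed_sa(loss, theta0, n_iter=5000, a=0.1, c=0.1, b=0.5, seed=0):
    """Gradient-free SA recursion with injected noise whose amplitude is annealed.

    Decay rates below are illustrative placeholders, not the schedules from the paper.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k              # step-size gain
        c_k = c / k ** 0.25      # finite-difference half-width
        b_k = b / np.sqrt(k)     # injected-noise amplitude, decreased over time
        g_k = fd_gradient(loss, theta, c_k)
        # Descent step plus decaying Gaussian perturbation to allow escape from local optima
        theta = theta - a_k * g_k + b_k * rng.standard_normal(theta.shape)
    return theta

if __name__ == "__main__":
    # Hypothetical multimodal loss: global minimum at 0, local minima near other integers.
    loss = lambda t: float(t[0] ** 2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * t[0])))
    print(annealed_sa(loss, theta0=np.array([2.0])))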