So-called accelerated convergence is an ingenious idea for improving the asymptotic accuracy of stochastic approximation (gradient-based) algorithms. The estimates obtained from the basic algorithm are subjected to a second round of averaging, which leads to optimal accuracy for estimates of time-invariant parameters. In this contribution, some simple calculations are used to gain intuitive insight into these mechanisms. Of particular interest are the properties of accelerated convergence schemes in tracking situations. It is shown that a second round of averaging leads to the recursive least-squares algorithm with a forgetting factor. This also means that when the true parameters change as a random walk, accelerated convergence does not, typically, give optimal tracking properties.
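To make the averaging mechanism concrete, the following is a minimal Python sketch, not the paper's own derivation: a scalar LMS-type stochastic approximation recursion estimating a constant parameter, followed by a second round of (Polyak-Ruppert-style) averaging of the raw iterates. The specific choices here (gain sequence gamma_t = t^(-0.7), Gaussian noise, the variable names) are illustrative assumptions, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

theta_true = 2.0   # time-invariant parameter to be estimated (assumed value)
n_steps = 10_000
noise_std = 1.0

theta = 0.0        # raw stochastic-approximation iterate
theta_bar = 0.0    # second round: running average of the raw iterates

for t in range(1, n_steps + 1):
    # Noisy observation of the constant parameter
    y = theta_true + noise_std * rng.standard_normal()

    # First round: gradient-based update with a gain decaying slower
    # than 1/t (exponent in (0.5, 1)), as averaging schemes assume
    gamma = t ** -0.7
    theta += gamma * (y - theta)

    # Second round: plain arithmetic average over all iterates so far
    theta_bar += (theta - theta_bar) / t

print(f"raw SA estimate:   {theta:.4f}")
print(f"averaged estimate: {theta_bar:.4f}")
```

For a constant parameter, the averaged iterate is typically much closer to the true value than the raw one, which illustrates the accuracy gain the abstract refers to. In a tracking situation, by contrast, the uniform average weights old iterates as heavily as recent ones; replacing it with an exponentially weighted average introduces exactly the forgetting-factor behavior of recursive least squares discussed above.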