Dynamical cell structures (DCS) neural networks are self-organizing and can grow their architecture to fit the problem at hand, making them an excellent tool for on-line aircraft control, clustering, and numerous other applications (1,2,3,4,5). Their performance frequently exceeds that of multilayer perceptrons and Kohonen-type neural networks. However, a proof of convergence of the learning algorithm has been lacking. This paper provides a rigorous proof and shows how DCS can be applied to time-series prediction, with applications to the estimation of embedding dimensions, financial time series, and data smoothing. We compare this method with the classical linear-filter approach. Genetic algorithms are used to optimize the objective function with respect to the learning parameters.
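To illustrate the growing-architecture idea behind DCS-type networks, the following is a minimal sketch, not the paper's algorithm: nodes adapt competitively toward inputs, and a new node is periodically inserted near the node with the largest accumulated quantization error, so the network grows where the data demand it. All names, learning rates, and the insertion schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative growing-network sketch (hypothetical parameters, not the
# DCS algorithm of the paper): start with two nodes in 2-D input space.
nodes = rng.random((2, 2))
errors = np.zeros(2)                # accumulated quantization error per node
eps_b, eps_n = 0.2, 0.02            # winner / global learning rates (assumed)

for t in range(1, 1001):
    x = rng.random(2)               # sample a 2-D input
    d = np.linalg.norm(nodes - x, axis=1)
    w = int(np.argmin(d))           # best-matching node
    errors[w] += d[w] ** 2
    nodes[w] += eps_b * (x - nodes[w])          # move winner toward input
    nodes += eps_n * (x - nodes) / len(nodes)   # weak global smoothing
    if t % 100 == 0:                # grow: split the highest-error node
        q = int(np.argmax(errors))
        nodes = np.vstack([nodes, nodes[q] + 0.01 * rng.standard_normal(2)])
        errors = np.append(errors * 0.5, errors[q] * 0.5)

print(len(nodes))                   # the architecture has grown from 2 nodes
```

In a full DCS network the neighborhood adaptation follows the learned lateral-connection topology rather than the uniform smoothing used here; this sketch only conveys the error-driven growth mechanism.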