The Self-Organizing Map (SOM) has the special property of effectively creating spatially organized internal representations of various features of the input vectors. The SOM is widely used for clustering and classification. In its classical training procedure, if more than one neuron is closest to the input, the algorithm chooses the first of them and ignores the others. This discards an opportunity to train the network faster and to use the information contained in the data more effectively and completely. The idea of this paper is to address this problem by correcting all neurons belonging to the neighborhoods of all closest neurons, with each neighbor neuron corrected exactly once, no matter how many of those neighborhoods it falls into. A first attempt at a generalized-net model of this training procedure of the well-known SOM is presented. The model is universal for such training algorithms and enables their better understanding.
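
The following is a minimal sketch, not the authors' implementation, of the tie-aware update step described above: all neurons tied at the minimal distance to the input are treated as winners, the union of their grid neighborhoods is formed, and each neuron in that union is corrected only once per input presentation. The names `learning_rate` and `neighborhood_radius` are illustrative assumptions.

```python
import numpy as np

def som_update(weights, grid_coords, x, learning_rate=0.1, neighborhood_radius=1.0):
    """One training step on input vector x.

    weights     : (n_neurons, dim) array of weight vectors
    grid_coords : (n_neurons, 2) array of neuron positions on the map grid
    """
    # Distances from the input to every neuron in weight space.
    dists = np.linalg.norm(weights - x, axis=1)

    # All best-matching units: every neuron tied at the minimal distance,
    # not just the first one encountered.
    bmu_indices = np.flatnonzero(np.isclose(dists, dists.min()))

    # Union of the grid neighborhoods of all winners; using a set ensures
    # that each neighbor is corrected only once, however many of the
    # winners' neighborhoods it falls into.
    to_update = set()
    for bmu in bmu_indices:
        grid_dists = np.linalg.norm(grid_coords - grid_coords[bmu], axis=1)
        to_update.update(np.flatnonzero(grid_dists <= neighborhood_radius).tolist())

    # Single correction per selected neuron.
    for i in to_update:
        weights[i] += learning_rate * (x - weights[i])
    return weights
```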