Decision tree classification is one of the most practical and effective methods used in inductive learning. Many different approaches, commonly applied to decision making and prediction, have been invented to construct decision tree classifiers. These approaches try to optimize parameters such as classification accuracy, classification speed, size of the constructed tree, learning speed, and memory usage. There is a trade-off between these parameters: optimizing one may degrade another, so all existing approaches try to strike a balance among them. In this study, considering the effect of the whole data set on the class assignment of any record, we propose a new approach that constructs trees which are not perfectly accurate but are less complex, built in a short time and with a small amount of memory. To achieve this, a multi-step process is used. In each step, we traverse the training data set twice, from beginning to end and in reverse, to extract the class pattern used for attribute selection. Using the selected attribute, we create new branches in the tree. After branching, the selected attribute and some records of the training data set are deleted at the end of the step. This process continues over several steps on the remaining data and attributes until the tree is completely constructed. To obtain an optimized tree, the parameters used in this algorithm are tuned with genetic algorithms. To compare this new approach with previous ones, we used well-known data sets that have appeared in other research. The comparison is based on classification accuracy and decision tree size. Experimental results show that this approach is efficient, particularly for massive data sets, memory restrictions, or short learning times.
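
The abstract does not specify the pattern-extraction or record-deletion rules, so the following Python sketch is only one plausible reading of the described loop, not the paper's method. The helpers extract_class_pattern (the forward/backward trace), attribute_score (a purity proxy standing in for the unstated selection criterion), and the treatment of pure branches as implicit record deletion are all assumptions introduced for illustration.

    # Illustrative sketch only: extract_class_pattern, attribute_score, and
    # the purity-based stopping rule are assumptions, not the paper's method.
    from collections import Counter

    def extract_class_pattern(records, attribute, target):
        """Scan the records twice (forward and backward) and tally, per
        attribute value, the classes seen in each direction. The two passes
        stand in for the paper's forward/backward trace; how the real
        method combines them is not stated in the abstract."""
        pattern = {}
        for row in records:                     # forward pass
            pattern.setdefault(row[attribute], Counter())[row[target]] += 1
        for row in reversed(records):           # backward pass
            pattern.setdefault(row[attribute], Counter())[row[target]] += 1
        return pattern

    def attribute_score(pattern):
        """Assumed criterion: prefer the attribute whose values are most
        strongly dominated by a single class (a purity proxy)."""
        total = sum(sum(c.values()) for c in pattern.values())
        return sum(max(c.values()) for c in pattern.values()) / total

    def build_tree(records, attributes, target):
        """Multi-step construction: select an attribute from the class
        pattern, branch on it, drop the attribute (and, via pure-branch
        leaves, the records it already classifies), then recurse."""
        classes = Counter(row[target] for row in records)
        if len(classes) == 1 or not attributes:
            return classes.most_common(1)[0][0]  # leaf: majority class
        best = max(attributes,
                   key=lambda a: attribute_score(
                       extract_class_pattern(records, a, target)))
        node = {"attribute": best, "branches": {}}
        remaining = [a for a in attributes if a != best]
        for value in {row[best] for row in records}:
            subset = [row for row in records if row[best] == value]
            node["branches"][value] = build_tree(subset, remaining, target)
        return node

Calling build_tree on a list of dictionaries, e.g. build_tree(data, ["outlook", "windy"], target="play"), returns a nested dictionary of decision nodes with class labels at the leaves. In the actual approach, the thresholds governing attribute selection and record deletion would be the parameters tuned by the genetic algorithm mentioned above.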