In many real-world data sets, the ranges of values of different data attributes differ considerably. Data standardization is a commonly used preprocessing method that rescales attribute values into a specified range. A number of data standardization methods have been proposed in the literature, and it is known that the choice of standardization technique can influence the performance of learning algorithms. Support vector machines (SVMs) are widely used kernel-based learning algorithms whose basic idea is to map data implicitly from the input space into a feature space, where a linear algorithm solves the classification problem. Because data standardization changes the values of the transformed data in the feature space, it is important to investigate the effect of standardization methods on the performance of kernel-based classification algorithms. This study presents a comparative assessment of data standardization methods and their effect on the performance of the SVM learning algorithm for classification problems. Three simulated data sets and nine real-world data sets (eight of them medical data sets) are employed to examine the effect of nine different data standardization methods, combined with two commonly used kernels, Gaussian and polynomial, on the performance of SVM. Classification accuracy, type I error, type II error, and two further measures, kernel target alignment and class separability, serve as the evaluation criteria. The experimental results show that a suitable standardization method significantly improves the performance of SVM, whereas a poor choice of standardization method can decrease its classification accuracy.
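To make the motivation concrete, the sketch below (an illustrative example, not taken from the paper) implements two common standardization methods, min-max scaling and z-score standardization, in pure Python, and shows how an attribute with a much larger range dominates a Gaussian (RBF) kernel when the data are left unstandardized. The toy data, the `gamma` value, and the function names are assumptions for illustration only.

```python
import math

def min_max(col):
    """Rescale a list of values linearly into [0, 1]."""
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def z_score(col):
    """Shift to zero mean and scale to unit (population) standard deviation."""
    mu = sum(col) / len(col)
    sd = math.sqrt(sum((x - mu) ** 2 for x in col) / len(col))
    return [(x - mu) / sd for x in col]

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

# Toy data set: attribute 0 spans [1, 3], attribute 1 spans [1000, 3000].
rows = [[1.0, 1000.0], [2.0, 2000.0], [3.0, 3000.0]]

# On raw data the second attribute dominates the squared distance,
# so the kernel value between the first two rows underflows to 0.
raw_k = rbf(rows[0], rows[1])

# After min-max scaling each column, both attributes contribute equally.
cols = list(zip(*rows))
scaled_rows = list(zip(*[min_max(list(c)) for c in cols]))
scaled_k = rbf(scaled_rows[0], scaled_rows[1])

print(raw_k)     # effectively 0: the kernel carries no similarity information
print(scaled_k)  # a usable similarity value in (0, 1)
```

The point of the sketch is that the Gaussian kernel depends on Euclidean distances in the input space, so whichever attribute has the widest range controls the kernel values; standardization restores the influence of the other attributes, which is exactly why the choice of method can change SVM accuracy.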