When to Use Standardization and Normalization: Empirical Evidence From Machine Learning Models and XAI

Cited by: 19
Authors
Sujon, Khaled Mahmud [1 ]
Hassan, Rohayanti Binti [2 ]
Towshi, Zeba Tusnia [3 ]
Othman, Manal A. [4 ]
Samad, Md Abdus [5 ]
Choi, Kwonhue [5 ]
Affiliations
[1] Univ Teknol Malaysia UTM, Fac Comp, Dept Software Engn, Johor Baharu 81310, Johor, Malaysia
[2] Univ Teknol Malaysia UTM, Fac Comp, Johor Baharu 81310, Johor, Malaysia
[3] Independent Univ, Dept Comp Sci & Engn, Dhaka 1229, Bangladesh
[4] Princess Nourah Bint Abdulrahman Univ, Coll Med, Med Educ Dept, Biomed Informat, Riyadh 11671, Saudi Arabia
[5] Yeungnam Univ, Dept Informat & Commun Engn, Gyongsan 38541, South Korea
Keywords
Standardization; normalization; feature scaling; data preprocessing; machine learning; explainable AI (XAI);
DOI
10.1109/ACCESS.2024.3462434
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Optimizing machine learning (ML) model performance relies heavily on appropriate data preprocessing techniques. Despite the widespread use of standardization and normalization, empirical comparisons across different models, dataset sizes, and domains remain sparse. This study bridges that gap by evaluating five machine learning algorithms, namely Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Adaptive Boosting (AdaBoost), on datasets of varying sizes from the business, health, and agriculture domains. Each model is assessed without scaling, with standardized data, and with normalized data. The comparative analysis reveals that while standardization consistently improves the performance of linear models such as SVM and LR on large and medium datasets, normalization enhances the performance of linear models on small datasets. Moreover, the study employs SHapley Additive exPlanations (SHAP) summary plots to examine how each feature contributes to model predictions and how interpretability differs between unscaled and scaled datasets. The study provides practical guidelines for selecting an appropriate scaling technique based on dataset characteristics and compatibility with various algorithms. Finally, this investigation lays a foundation for data preprocessing and feature engineering across diverse models and domains, offering actionable insights for practitioners.
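To make the experimental protocol concrete, the sketch below wires two of the evaluated models into scikit-learn pipelines trained without scaling, with standardization (StandardScaler), and with min-max normalization (MinMaxScaler), then compares held-out accuracy. This is a minimal illustration only: the public breast-cancer dataset, the train/test split, and the model hyperparameters are assumptions for demonstration, not the paper's actual data or configuration.

```python
# Minimal sketch (not the authors' pipeline) comparing no scaling,
# standardization, and min-max normalization for two of the evaluated models.
# Dataset, split, and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

scalers = {
    "unscaled": None,
    "standardized": StandardScaler(),  # z = (x - mean) / std
    "normalized": MinMaxScaler(),      # x' = (x - min) / (max - min)
}
models = {
    "SVM": SVC(kernel="rbf"),
    "LR": LogisticRegression(max_iter=5000),
}

for scale_name, scaler in scalers.items():
    for model_name, model in models.items():
        # Fit the scaler (if any) on the training split only, then the model.
        pipe = make_pipeline(scaler, model) if scaler is not None else make_pipeline(model)
        pipe.fit(X_train, y_train)
        acc = accuracy_score(y_test, pipe.predict(X_test))
        print(f"{model_name:>3} | {scale_name:>12} | accuracy = {acc:.3f}")
```

On a dataset of this kind, the margin-based and linear models (SVM, LR) typically gain the most from scaling, while tree ensembles such as RF, XGBoost, and AdaBoost are largely insensitive to it, which is consistent with the comparison the abstract describes.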
Pages: 135300-135314
Page count: 15