When to Use Standardization and Normalization: Empirical Evidence From Machine Learning Models and XAI

Cited by: 19
Authors
Sujon, Khaled Mahmud [1 ]
Hassan, Rohayanti Binti [2 ]
Towshi, Zeba Tusnia [3 ]
Othman, Manal A. [4 ]
Samad, Md Abdus [5 ]
Choi, Kwonhue [5 ]
Affiliations
[1] Univ Teknol Malaysia UTM, Fac Comp, Dept Software Engn, Johor Baharu 81310, Johor, Malaysia
[2] Univ Teknol Malaysia UTM, Fac Comp, Johor Baharu 81310, Johor, Malaysia
[3] Independent Univ, Dept Comp Sci & Engn, Dhaka 1229, Bangladesh
[4] Princess Nourah Bint Abdulrahman Univ, Coll Med, Med Educ Dept, Biomed Informat, Riyadh 11671, Saudi Arabia
[5] Yeungnam Univ, Dept Informat & Commun Engn, Gyongsan 38541, South Korea
Keywords
Standardization; normalization; feature scaling; data preprocessing; machine learning; explainable AI (XAI);
DOI
10.1109/ACCESS.2024.3462434
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
Optimizing machine learning (ML) model performance relies heavily on appropriate data preprocessing techniques. Despite the widespread use of standardization and normalization, empirical comparisons across different models, dataset sizes, and domains remain sparse. This study bridges that gap by evaluating five machine learning algorithms, namely Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Adaptive Boosting (AdaBoost), on datasets of varying sizes from the business, health, and agriculture domains. Each model was assessed without scaling, with standardized data, and with normalized data. The comparative analysis reveals that while standardization consistently improves the performance of linear models such as SVM and LR on large and medium datasets, normalization enhances their performance on small datasets. Moreover, the study employs SHapley Additive exPlanations (SHAP) summary plots to examine how each feature contributes to model performance and interpretability on unscaled and scaled data. The study provides practical guidelines for selecting an appropriate scaling technique based on dataset characteristics and compatibility with various algorithms. Finally, this investigation lays a foundation for data preprocessing and feature engineering across diverse models and domains, offering actionable insights for practitioners.
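Standardization and normalization, the two techniques compared throughout the study, are most commonly applied with scikit-learn's StandardScaler (rescale to zero mean and unit variance) and MinMaxScaler (rescale to the [0, 1] range). The following minimal Python sketch is illustrative only and not taken from the paper: the synthetic dataset, the SVC model, and the train/test split are assumptions standing in for the study's datasets and evaluation protocol.

    # Illustrative comparison: no scaling vs. standardization vs. normalization.
    # Synthetic data and an SVM stand in for the paper's datasets and models.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler, MinMaxScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    pipelines = {
        "no scaling":   make_pipeline(SVC()),
        "standardized": make_pipeline(StandardScaler(), SVC()),  # zero mean, unit variance
        "normalized":   make_pipeline(MinMaxScaler(), SVC()),    # rescaled to [0, 1]
    }
    for name, pipe in pipelines.items():
        pipe.fit(X_train, y_train)  # the scaler is fit on the training split only
        print(f"{name}: test accuracy = {pipe.score(X_test, y_test):.3f}")
    # SHAP summary plots (as used in the study) could then be produced with the
    # shap package, e.g. shap.summary_plot on values from a model-appropriate explainer.

Wrapping the scaler in a Pipeline ensures it is fit only on the training split and then applied to the test split, avoiding data leakage during evaluation.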
Pages: 135300-135314 (15 pages)