Evolutionary optimization of the area under precision-recall curve for classifying imbalanced multi-class data

Cited: 0
Authors
Chabbouh, Marwa [1 ]
Bechikh, Slim [2 ]
Mezura-Montes, Efren [3 ]
Ben Said, Lamjed [1 ]
Affiliations
[1] Univ Tunis, SMART Lab, ISG Campus,Liberty St, Tunis 2000, Tunisia
[2] Univ Tunis, SMART Lab, IEEE SM, ISG Campus,Liberty St, Tunis 2000, Tunisia
[3] Univ Veracruz, Artificial Intelligence Res Inst, Calle Paseo 112, Xalapa 91097, Veracruz, Mexico
Keywords
Multi-class classification; Imbalanced data; Genetic-based machine learning; Area under precision-recall curve; DECISION TREES; STATISTICAL COMPARISONS; DATA-SETS; CLASSIFICATION; CLASSIFIERS; ALGORITHM; PERFORMANCE; FRAMEWORK; ENSEMBLES;
DOI
10.1007/s10732-024-09544-z
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Classification of imbalanced multi-class data remains one of the most challenging issues in machine learning and data mining. The task becomes even harder when the classes containing fewer instances lie in overlapping regions. Several approaches have been proposed in the literature to deal with these two issues, such as the use of decomposition, the design of ensembles, the employment of misclassification costs, and the development of ad-hoc strategies. Despite these efforts, the number of existing works dealing with imbalance in multi-class data is much smaller than in the binary case. Moreover, existing approaches still suffer from many limitations, including difficulties in handling imbalance across multiple classes, challenges in adapting sampling techniques, limitations of certain classifiers, the need for specialized evaluation metrics, the complexity of data representation, and increased computational costs. Motivated by these observations, we propose a multi-objective evolutionary induction approach that evolves a population of NLM-DTs (Non-Linear Multivariate Decision Trees) using the $\theta$-NSGA-III ($\theta$-Non-dominated Sorting Genetic Algorithm-III) as a search engine.
The resulting algorithm is termed EMO-NLM-DT (Evolutionary Multi-objective Optimization of NLM-DTs) and is designed to optimize the construction of NLM-DTs for imbalanced multi-class data classification by simultaneously maximizing the Macro-Average-Precision and the Macro-Average-Recall as two possibly conflicting objectives. The choice of these two measures as objective functions is motivated by a recent study on the appropriateness of performance metrics for imbalanced data classification, which suggests that the mAURPC (mean Area Under Recall Precision Curve) satisfies all necessary conditions for imbalanced multi-class classification. Moreover, adopting the NLM-DT as the baseline classifier to be optimized allows the generation of non-linear hyperplanes that are well adapted to the geometrical shapes of the class boundaries. A statistical analysis of comparative experimental results on more than twenty imbalanced multi-class data sets shows that EMO-NLM-DT builds NLM-DTs that classify imbalanced multi-class data more effectively than seven relevant and recent state-of-the-art methods.
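The two objective measures named in the abstract (Macro-Average-Precision and Macro-Average-Recall) and the mAURPC evaluation metric can be sketched as follows. This is an illustrative reconstruction from their standard definitions, not the authors' implementation; all function names are hypothetical.

```python
def macro_precision_recall(y_true, y_pred, classes):
    """Per-class precision and recall, averaged over classes with
    equal weight (so minority classes count as much as majority ones)."""
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(precisions) / len(classes), sum(recalls) / len(classes)

def average_precision(y_true_bin, scores):
    """Area under the precision-recall curve for one class (one-vs-rest),
    using the step-wise sum AP = sum_k (R_k - R_{k-1}) * P_k."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(y_true_bin)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if y_true_bin[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / n_pos
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

def mAURPC(y_true, class_scores, classes):
    """Mean area under the PR curve: one-vs-rest AP per class, averaged."""
    aps = []
    for j, c in enumerate(classes):
        y_bin = [1 if t == c else 0 for t in y_true]
        aps.append(average_precision(y_bin, [s[j] for s in class_scores]))
    return sum(aps) / len(classes)
```

In a multi-objective setting such as the one the abstract describes, `macro_precision_recall` would supply the two fitness values that the evolutionary search maximizes jointly, while `mAURPC` serves as the final evaluation metric on held-out data.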
Pages: 66