TDMO: Dynamic multi-dimensional oversampling for exploring data distribution based on extreme gradient boosting learning

Cited: 7
Authors
Jia, Liyan [1 ]
Wang, Zhiping [1 ]
Sun, Pengfei [1 ]
Xu, Zhaohui [2 ]
Yang, Sibo [1 ]
Affiliations
[1] Dalian Maritime Univ, Sch Sci, Dalian 116000, Peoples R China
[2] Dalian Med Univ, Affiliated Hosp 1, Clin Lab Dept, Dalian 116011, Peoples R China
Keywords
Class imbalance learning; Data distribution; Oversampling; k-nearest neighbors; SMOTE; RE-SAMPLING METHOD; CLASSIFICATION; MODEL; SVM;
DOI
10.1016/j.ins.2023.119621
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812
Abstract
The synthetic minority oversampling technique (SMOTE) is the most general and popular solution for imbalanced data. Although SMOTE is effective in solving the class imbalance problem in most cases, it does not sufficiently exploit the prior distribution of the data. Additionally, most existing SMOTE variants randomly generate new instances between a minority sample and its nearest neighbors, which carries the risk of noise propagation. To address these issues, this paper proposes a novel approach to exploring data distributions: local distribution trust estimation based on extreme gradient boosting (XGBoost) combined with dynamic multi-dimensional oversampling (TDMO). First, undersampling and XGBoost are used to train multiple balanced subsets, identifying the internal structure of the original data and yielding a per-instance classification prediction accuracy, called the confidence level (CL). Then, instances with low CL (i.e., noise) are filtered out, and the densities of the two classes in the neighborhood of each non-noise instance are evaluated to create candidate samples that expand the diversity of the minority class. Finally, the minority class is enhanced by combining multiple samples in a multi-dimensional feature space. Extensive experiments demonstrate that TDMO clearly outperforms the competing oversampling methods and achieves the best classification results.
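The first step of the abstract (training boosted classifiers on multiple balanced, undersampled subsets and scoring each instance by how often it is predicted correctly) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost, and the function name `confidence_levels`, the subset count, and the noise threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def confidence_levels(X, y, n_subsets=10, random_state=0):
    """Estimate a per-instance confidence level (CL): train a boosted
    classifier on each of several balanced subsets (random undersampling
    of the majority class) and average how often each instance in the
    full data set is predicted correctly."""
    rng = np.random.default_rng(random_state)
    X, y = np.asarray(X), np.asarray(y)
    # Identify minority/majority indices for binary-labeled data.
    min_label = min(np.unique(y), key=lambda c: np.sum(y == c))
    min_idx = np.flatnonzero(y == min_label)
    maj_idx = np.flatnonzero(y != min_label)
    hits = np.zeros(len(y))
    for _ in range(n_subsets):
        # Balanced subset: all minority samples + an equal-size
        # random draw from the majority class.
        sub_maj = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([min_idx, sub_maj])
        clf = GradientBoostingClassifier().fit(X[idx], y[idx])
        # Count a "hit" for every instance the model classifies correctly.
        hits += (clf.predict(X) == y)
    return hits / n_subsets  # CL in [0, 1] per instance

# Instances with low CL would then be treated as noise and filtered
# out before oversampling, e.g.: keep = confidence_levels(X, y) >= 0.5
```

Averaging over several independently undersampled subsets is what makes the estimate reflect the local data distribution rather than the chance composition of a single balanced sample.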
Pages: 36