FIAO: Feature Information Aggregation Oversampling for imbalanced data classification

Cited by: 1
Authors
Wang, Fei [1 ]
Zheng, Ming [1 ,2 ]
Hu, Xiaowen [1 ]
Li, Hongchao [1 ,2 ]
Wang, Taochun [1 ,2 ]
Chen, Fulong [1 ,2 ]
Affiliations
[1] Anhui Normal Univ, Sch Comp & Informat, Wuhu 241002, Peoples R China
[2] Anhui Normal Univ, Anhui Prov Key Lab Ind Intelligence Data Secur, Wuhu 241002, Anhui, Peoples R China
Keywords
Imbalanced datasets; Oversampling method; Feature information aggregation; SMOTE
DOI
10.1016/j.asoc.2024.111774
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Classification performance often deteriorates when machine learning algorithms are trained on imbalanced data. Although oversampling methods have been successfully employed to address imbalanced data, existing approaches suffer from limitations such as information loss, difficult parameter selection, and boundary effects arising from nearest-neighbor and density calculations. This study therefore introduces a novel oversampling method called Feature Information Aggregation Oversampling (FIAO). FIAO leverages feature information, including feature importance, feature density, and standard deviation, to guide the oversampling process. First, this feature information is used to partition each feature's range into intervals suitable for feature generation. Next, feature values are generated within these intervals. Finally, the generated features are merged into the minority-class data to achieve effective oversampling. The key advantage of FIAO lies in its ability to fully exploit the intrinsic information carried by the features themselves, thereby avoiding issues of parameter selection and boundary effects. To assess its efficacy, extensive experiments were conducted on 12 widely used benchmark datasets, comparing the proposed method against 10 popular resampling methods across four commonly used classifiers. The experimental results show that FIAO performs well across multiple application scenarios and achieves the best overall performance among the compared methods.
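The abstract outlines a three-step pipeline: derive per-feature generation intervals from feature statistics, generate feature values within those intervals, and merge the synthetic samples into the minority class. A minimal NumPy sketch of that pipeline follows, using per-feature mean ± standard deviation as the generation interval; note that the interval choice and the function name are illustrative assumptions, not the paper's exact procedure, which also incorporates feature importance and feature density:

```python
import numpy as np

def fiao_like_oversample(X_min, n_new, rng=None):
    """Generate n_new synthetic minority-class samples.

    Illustrative sketch only: each feature's generation interval
    is taken as [mean - std, mean + std]; the actual FIAO method
    also uses feature importance and feature density to shape
    the intervals.
    """
    rng = np.random.default_rng(rng)
    # Step 1: derive per-feature intervals from feature statistics
    mu = X_min.mean(axis=0)
    sigma = X_min.std(axis=0)
    lo, hi = mu - sigma, mu + sigma
    # Step 2: draw each feature value independently within its interval
    X_new = rng.uniform(lo, hi, size=(n_new, X_min.shape[1]))
    # Step 3: merge the generated samples into the minority-class data
    return np.vstack([X_min, X_new])

# Usage: grow a toy minority class of 5 samples to 20
X_min = np.array([[1.0, 10.0], [1.2, 11.0], [0.9, 9.5],
                  [1.1, 10.5], [1.0, 10.2]])
X_bal = fiao_like_oversample(X_min, n_new=15, rng=0)
print(X_bal.shape)  # (20, 2)
```

Because every synthetic value is drawn inside an interval computed from the minority data itself, the sketch needs no neighbor count or density threshold, which mirrors the abstract's claim of avoiding parameter-selection and boundary-effect issues.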
Pages: 14