Improving classifier training efficiency for automatic cyberbullying detection with Feature Density

Cited by: 24
Authors
Eronen, Juuso [1 ]
Ptaszynski, Michal [1 ]
Masui, Fumito [1 ]
Smywinski-Pohl, Aleksander [2 ]
Leliwa, Gniewosz [3 ]
Wroczynski, Michal [3 ]
Affiliations
[1] Kitami Inst Technol, Kitami, Hokkaido, Japan
[2] AGH Univ Sci & Technol, Krakow, Poland
[3] Samurailabs, Gdynia, Poland
Keywords
Feature density; Dataset complexity; Linguistics; Cyberbullying; Document classification; Preprocessing; Syntactic complexity; Impact; Times; Size
DOI
10.1016/j.ipm.2021.102616
Chinese Library Classification
TP [Automation and computer technology]
Discipline code
0812
Abstract
We study the effectiveness of Feature Density (FD) under different linguistically-backed feature preprocessing methods as a means of estimating dataset complexity, which in turn is used to comparatively estimate the potential performance of machine learning (ML) classifiers prior to any training. We hypothesize that estimating dataset complexity allows for a reduction in the number of required experiment iterations. This way we can optimize the resource-intensive training of ML models, which is becoming a serious issue due to increases in available dataset sizes and the ever-rising popularity of models based on Deep Neural Networks (DNN). The constantly increasing need for more powerful computational resources also affects the environment, due to the alarmingly growing CO2 emissions caused by training large-scale ML models. The research was conducted on multiple datasets, including popular ones such as the Yelp business review dataset used for training typical sentiment analysis models, as well as more recent datasets addressing the problem of cyberbullying, which, besides being a serious social problem, is also a much more sophisticated problem from the point of view of linguistic representation. We use cyberbullying datasets collected for multiple languages, namely English, Japanese and Polish. The difference in linguistic complexity of the datasets additionally allows us to discuss the efficacy of linguistically-backed word preprocessing.
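The abstract does not give the formula, but Feature Density is commonly computed as the number of unique features divided by the total number of feature occurrences in a corpus. The sketch below assumes that definition with whitespace-tokenized word 1-grams as features; the paper itself evaluates several linguistically-backed preprocessing variants, which this minimal example does not reproduce.

```python
def feature_density(documents):
    """FD = |unique features| / |total feature occurrences|.

    Assumes whitespace-tokenized word 1-grams as features
    (an illustrative choice, not the paper's exact setup).
    """
    tokens = [tok for doc in documents for tok in doc.split()]
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

corpus = [
    "the food was great",
    "the service was slow",
]
# 6 unique tokens out of 8 total occurrences
print(feature_density(corpus))  # → 0.75
```

A lower FD indicates more repetition of features relative to corpus size; comparing FD across datasets (or across preprocessing schemes) is what lets complexity be estimated before any classifier training.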
Pages: 37