Classifying Crowdsourced Citizen Complaints through Data Mining: Accuracy Testing of k-Nearest Neighbors, Random Forest, Support Vector Machine, and AdaBoost

Cited by: 2
Authors
Madyatmadja, Evaristus D. [1 ]
Sianipar, Corinthias P. M. [2 ,3 ]
Wijaya, Cristofer [1 ]
Sembiring, David J. M. [4 ]
Affiliations
[1] Bina Nusantara Univ, Informat Syst Dept, Jakarta 11530, Indonesia
[2] Kyoto Univ, Dept Global Ecol, Kyoto 6068501, Japan
[3] Kyoto Univ, Div Environm Sci & Technol, Sakyo Ku, Kyoto 6068502, Japan
[4] Indonesian Inst Technol & Business ITBI, Deli Serdang 20374, Indonesia
Source
INFORMATICS-BASEL | 2023, Vol. 10, Issue 4
Keywords
public complaint; citizen science; crowdsourcing; sustainable city; machine learning; smart city; knowledge extraction; text mining; large language model; generative AI; E-GOVERNMENT; CLASSIFICATION; SVM; MANAGEMENT; ALGORITHM; CHINA; KNN;
DOI
10.3390/informatics10040084
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline classification codes
081203 ; 0835 ;
Abstract
Crowdsourcing has gradually become an effective e-government process for gathering citizen complaints about the implementation of various public services. In practice, the collected complaints form a massive dataset, making it difficult for government officers to analyze the big data effectively. It is consequently vital to use data mining algorithms to classify the citizen complaint data for efficient follow-up actions. However, different classification algorithms produce varied classification accuracies. Thus, this study aimed to compare the accuracy of several classification algorithms on crowdsourced citizen complaint data. Taking the case of the LAKSA app in Tangerang City, Indonesia, this study included k-Nearest Neighbors, Random Forest, Support Vector Machine, and AdaBoost in the accuracy assessment. The data were taken from crowdsourced citizen complaints submitted to the LAKSA app, including those aggregated from official social media channels, from May 2021 to April 2022. The results showed SVM with a linear kernel to be the most accurate of the assessed algorithms (89.2%), whereas AdaBoost (base learner: Decision Trees) produced the lowest accuracy. Still, the accuracy of every algorithm varied with the amount of training data available for each classification category. Overall, the assessments indicated that the accuracies of the algorithms were not significantly different, with an overall variation of 4.3%. The AdaBoost-based classification, in particular, showed a strong dependence on the choice of base learner. Through its method and results, this study contributes to the e-government, data mining, and big data discourses. It recommends that governments continuously conduct supervised training of classification algorithms on their crowdsourced citizen complaints to achieve the highest possible accuracy, paving the way for smart and sustainable governance.
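The abstract describes a four-way classifier comparison on complaint text. The following is a minimal sketch of such a comparison using scikit-learn, assuming TF-IDF features, an 80/20 stratified split, and a hypothetical CSV file (laksa_complaints.csv) with complaint_text and category columns; the paper's actual preprocessing, feature representation, and hyperparameters are not specified in this record and may differ.

```python
# Minimal sketch of the four-classifier comparison described in the abstract.
# Assumptions (not taken from the paper): file name, column names, TF-IDF
# features, 80/20 stratified split, and mostly default hyperparameters.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("laksa_complaints.csv")  # hypothetical file and column names
X_train, X_test, y_train, y_test = train_test_split(
    df["complaint_text"], df["category"],
    test_size=0.2, random_state=42, stratify=df["category"])

classifiers = {
    "k-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "SVM (linear kernel)": SVC(kernel="linear"),
    # AdaBoost's default base learner is a depth-1 decision tree (stump);
    # swapping the base learner changes accuracy, as the abstract notes.
    "AdaBoost (decision-tree base)": AdaBoostClassifier(random_state=42),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)  # raw text -> TF-IDF -> model
    pipe.fit(X_train, y_train)
    acc = accuracy_score(y_test, pipe.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

The figures reported in the abstract (89.2% for the linear-kernel SVM, a 4.3% overall spread) come from the paper's own LAKSA dataset; a sketch like this would need the same complaint data and category labels to reproduce them.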
Pages: 24
Related papers
33 in total
  • [22] Pseudo Amino Acid Feature-Based Protein Function Prediction using Support Vector Machine and K-Nearest Neighbors
    Deen, Anjna Jayant
    Gyanchandani, Manasi
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2020, 11 (09) : 187 - 195
  • [23] Comparison of Accuracy Level K-Nearest Neighbor Algorithm and Support Vector Machine Algorithm in Classification Water Quality Status
    Danades, Amri
    Pratama, Devie
    Anggraini, Dian
    Anggriani, Diny
    PROCEEDINGS OF THE 2016 6TH INTERNATIONAL CONFERENCE ON SYSTEM ENGINEERING AND TECHNOLOGY (ICSET), 2016, : 137 - 141
  • [24] Investigation of Statistical Machine Learning Models for COVID-19 Epidemic Process Simulation: Random Forest, K-Nearest Neighbors, Gradient Boosting
    Chumachenko, Dmytro
    Meniailov, Ievgen
    Bazilevych, Kseniia
    Chumachenko, Tetyana
    Yakovlev, Sergey
    COMPUTATION, 2022, 10 (06)
  • [25] Comparison of machine learning methods for stationary wavelet entropy-based multiple sclerosis detection: decision tree, k-nearest neighbors, and support vector machine
    Zhang, Yudong
    Lu, Siyuan
    Zhou, Xingxing
    Yang, Ming
    Wu, Lenan
    Liu, Bin
    Phillips, Preetha
    Wang, Shuihua
     SIMULATION-TRANSACTIONS OF THE SOCIETY FOR MODELING AND SIMULATION INTERNATIONAL, 2016, 92 (09) : 861 - 871
  • [26] Snow Detection using In-Vehicle Video Camera with Texture-Based Image Features Utilizing K-Nearest Neighbor, Support Vector Machine, and Random Forest
    Khan, Md Nasim
    Ahmed, Mohamed M.
    TRANSPORTATION RESEARCH RECORD, 2019, 2673 (08) : 221 - 232
  • [27] COMPARING ACCURACY OF LOGISTIC REGRESSION, K-NEAREST NEIGHBOR, SUPPORT VECTOR MACHINE, AND NAÏVE BAYES MODELS USING TRACKING ENSEMBLE MACHINE LEARNING
    Kuntoro, Kuntoro
    JP JOURNAL OF BIOSTATISTICS, 2024, 24 (01) : 1 - 13
  • [28] IIR Shelving Filter, Support Vector Machine and k-Nearest Neighbors Algorithm Application for Voltage Transients and Short-Duration RMS Variations Analysis
    Liubcuk, Vladislav
    Kairaitis, Gediminas
    Radziukynas, Virginijus
    Naujokaitis, Darius
    INVENTIONS, 2024, 9 (01)
  • [29] Forest Land Resource Information Acquisition with Sentinel-2 Image Utilizing Support Vector Machine, K-Nearest Neighbor, Random Forest, Decision Trees and Multi-Layer Perceptron
    Zhang, Chen
    Liu, Yang
    Tie, Niu
     FORESTS, 2023, 14 (02)
  • [30] A COMPARATIVE STUDY OF FORECASTING CORPORATE CREDIT RATINGS USING ARTIFICIAL NEURAL NETWORKS, SUPPORT VECTOR MACHINE, RANDOM FOREST, THE NAIVE BAYES, DECISION TREE AND K-NEAREST NEIGHBOR
    Al-Sayed, Dalia Adel Abbas
    Awad, Wael Abdel Qader
    Salem, Mohamed Talaat Mohamed
    ADVANCES AND APPLICATIONS IN STATISTICS, 2024, 91 (02) : 125 - 139