A New Performance Evaluation Metric for Classifiers: Polygon Area Metric

Cited by: 29
Author
Aydemir, Onder [1]
Affiliation
[1] Karadeniz Tech Univ, Fac Engn, Dept Elect & Elect Engn, TR-61080 Trabzon, Turkey
Keywords
Classifier performance; Classification accuracy; Assessment metric; Polygon area metric
DOI
10.1007/s00357-020-09362-5
Chinese Library Classification
O1 [Mathematics]
Subject Classification Codes
0701; 070101
Abstract
Classifier performance assessment (CPA) is a challenging task in pattern recognition. In recent years, various CPA metrics have been developed to help assess the performance of classifiers. Although classification accuracy (CA), the most popular metric in the pattern recognition field, works well when the classes have equal numbers of samples, it fails to evaluate the recognition performance of each class when the classes have different numbers of samples. To overcome this problem, researchers have developed various metrics besides CA, including sensitivity, specificity, area under the curve, the Jaccard index, Kappa, and the F-measure. Reporting many evaluation metrics to assess classifier performance leads to large results tables. Moreover, when comparing classifiers, one classifier may be more successful on one metric yet perform poorly on the others; such situations make it difficult to track results and compare classifiers. This study proposes a stable and informative criterion that allows the performance of a classifier to be evaluated with a single metric, called the polygon area metric (PAM). Thus, classifier performance can be evaluated easily without the need for several metrics. The stability and validity of the proposed metric were tested with the k-nearest neighbor, support vector machine, and linear discriminant analysis classifiers on a total of seven datasets, five of which were artificial. The results indicate that the proposed PAM method is simple but effective for evaluating classifier performance.
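For context, the idea behind PAM is to place several standard scores (for example CA, sensitivity, specificity, AUC, Jaccard index, and F-measure, each in [0, 1]) on the axes of a regular polygon and to report the enclosed area, normalized so that a perfect classifier scores 1. The Python sketch below is a minimal illustration of that computation under those assumptions; the function name and the normalization choice are for demonstration only and are not the authors' reference implementation.

```python
import numpy as np

def polygon_area_metric(scores):
    """Combine several scores (each in [0, 1]) into one polygon-area value.

    Each score is placed on its own axis of a regular polygon; the area of
    the resulting polygon is normalized by the area of the full polygon
    (all scores equal to 1), so the result also lies in [0, 1].
    Illustrative sketch only, not the paper's reference implementation.
    """
    v = np.asarray(scores, dtype=float)
    n = v.size
    theta = 2.0 * np.pi / n                     # angle between adjacent axes
    # Polygon area as a fan of triangles around the center:
    # sum_i 0.5 * v_i * v_{i+1} * sin(theta), with wrap-around via np.roll.
    area = 0.5 * np.sin(theta) * np.sum(v * np.roll(v, -1))
    # Area of the regular n-gon when every score equals 1.
    full_area = 0.5 * n * np.sin(theta)
    return float(area / full_area)

# Example with six hypothetical scores:
# CA, sensitivity, specificity, AUC, Jaccard index, F-measure
print(polygon_area_metric([0.90, 0.85, 0.92, 0.88, 0.80, 0.87]))
```

With six axes the normalizing constant is the area of a regular hexagon with unit radius (about 2.598), so classifiers that are uniformly strong across all six scores yield values close to 1, while a classifier that is weak on even one score sees its polygon, and hence its PAM value, shrink.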
Pages: 16-26
Page count: 11