Measuring classifier performance: a coherent alternative to the area under the ROC curve

Cited by: 0
Authors
David J. Hand
Affiliations
[1] Imperial College London,Department of Mathematics
[2] Imperial College London,Institute for Mathematical Sciences
Source
Machine Learning | 2009 / Vol. 77
Keywords
ROC curves; Classification; Specificity; Sensitivity; Misclassification rate; Cost; Loss; Error rate;
DOI
Not available
Abstract
The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This is that it is fundamentally incoherent in terms of misclassification costs: the AUC uses different misclassification cost distributions for different classifiers. This means that using the AUC is equivalent to using different metrics to evaluate different classification rules. It is equivalent to saying that, using one classifier, misclassifying a class 1 point is p times as serious as misclassifying a class 0 point, but, using another classifier, misclassifying a class 1 point is P times as serious, where p≠P. This is nonsensical because the relative severities of different kinds of misclassifications of individual points are a property of the problem, not of the classifiers which happen to have been chosen. This property is explored in detail, and a simple valid alternative to the AUC is proposed.
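The abstract's incoherence argument rests on the standard probabilistic reading of the AUC: it equals the probability that a randomly chosen class 1 score exceeds a randomly chosen class 0 score (the Mann–Whitney statistic), which ties the measure to each classifier's own score distribution. A minimal sketch of this equivalence, using hypothetical scores not taken from the paper:

```python
def auc(scores0, scores1):
    """AUC as the Mann-Whitney statistic: the fraction of
    (class 0, class 1) score pairs where the class 1 score is
    higher. Ties count as half a win."""
    wins = 0.0
    for s1 in scores1:
        for s0 in scores0:
            if s1 > s0:
                wins += 1.0
            elif s1 == s0:
                wins += 0.5
    return wins / (len(scores0) * len(scores1))

# Hypothetical scores for two classifiers on the same test set
# (illustrative values only).
class0_a, class1_a = [0.1, 0.3, 0.4], [0.2, 0.7, 0.9]
class0_b, class1_b = [0.2, 0.5, 0.6], [0.4, 0.8, 0.9]

print(auc(class0_a, class1_a))  # 7 of 9 pairs ranked correctly
print(auc(class0_b, class1_b))  # also 7 of 9, despite different scores
```

Both classifiers here obtain the same AUC even though their score distributions differ; Hand's point is that, when the AUC is rewritten as an expected misclassification loss, the implicit cost distribution it averages over depends on those score distributions, and so differs between classifiers.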
Pages: 103-123 (20 pages)