Explaining the unsuitability of the kappa coefficient in the assessment and comparison of the accuracy of thematic maps obtained by image classification

Cited by: 405
Authors
Foody, Giles M. [1]
Affiliations
[1] Univ Nottingham, Sch Geog, Nottingham NG7 2RD, England
Keywords
Accuracy; Kappa coefficient; Chance; Prevalence; Bias
Keywords Plus
STANDARD ERRORS; HIGH AGREEMENT; PREVALENCE; MODELS; RELIABILITY; INDEX; AREA; BIAS
DOI
10.1016/j.rse.2019.111630
Chinese Library Classification
X [Environmental Science, Safety Science]
Discipline Code
08; 0830
Abstract
The kappa coefficient is not an index of accuracy; indeed, it is not an index of overall agreement but of agreement beyond chance. Chance agreement is, however, irrelevant in an accuracy assessment and is, in any case, inappropriately modelled in the calculation of the kappa coefficient for typical remote sensing applications. The magnitude of a kappa coefficient is also difficult to interpret. Classifications that satisfy demanding accuracy targets can yield values spanning the full range of widely used interpretation scales, from a level of agreement no better than that expected by chance alone through to almost perfect agreement (e.g. for a classification with an overall accuracy of 95%, the possible values of the kappa coefficient range from -0.026 to 0.900). Comparisons of kappa coefficients are particularly challenging when the classes vary in abundance (i.e. prevalence), as the magnitude of a kappa coefficient reflects not only agreement in labelling but also properties of the populations under study. It is shown that all of the arguments put forward for the use of the kappa coefficient in accuracy assessment are flawed and/or irrelevant, as they apply equally to other, sometimes easier to calculate, measures of accuracy. Calls for the kappa coefficient to be abandoned in accuracy assessments should finally be heeded, and researchers are encouraged instead to report a set of simple measures and associated outputs, such as estimates of per-class accuracy and the confusion matrix, when assessing and comparing classification accuracy.
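To make the abstract's numerical example concrete: the kappa coefficient is conventionally defined as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed overall agreement (the overall accuracy) and p_e is the chance agreement estimated from the row and column marginals of the confusion matrix. The Python sketch below is illustrative only and is not from the paper; the cohen_kappa helper and the two 2x2 confusion matrices are hypothetical constructions chosen to reproduce the extreme values quoted above for a classification with 95% overall accuracy.

import numpy as np

def cohen_kappa(cm):
    # Kappa from a confusion matrix (rows: map labels, columns: reference labels).
    cm = np.asarray(cm, dtype=float)
    cm = cm / cm.sum()                      # normalise counts to proportions
    p_o = np.trace(cm)                      # observed agreement = overall accuracy
    p_e = cm.sum(axis=1) @ cm.sum(axis=0)   # chance agreement from the marginals
    return (p_o - p_e) / (1.0 - p_e)

# Two hypothetical classifications, both with overall accuracy 0.95:
balanced = [[0.475, 0.025],   # classes equally prevalent
            [0.025, 0.475]]
skewed = [[0.950, 0.025],     # one class dominates (extreme prevalence)
          [0.025, 0.000]]

print(round(cohen_kappa(balanced), 3))  # 0.9
print(round(cohen_kappa(skewed), 3))    # -0.026

Although both matrices meet the demanding 95% accuracy target, kappa spans common interpretation scales from below chance-level agreement to almost perfect agreement, illustrating why its magnitude reflects class prevalence and not accuracy alone.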
Pages: 11