Random marginal agreement coefficients: rethinking the adjustment for chance when measuring agreement

Cited by: 28
Authors
Fay, MP [1]
Affiliation
[1] Natl Inst Allergy & Infect Dis, Bethesda, MD 20892 USA
Keywords
concordance correlation coefficient; kappa; random marginal agreement coefficient; reliability; weighted kappa
DOI
10.1093/biostatistics/kxh027
CLC number
Q [Biological Sciences]
Subject classification codes
07; 0710; 09
Abstract
Agreement coefficients quantify how well a set of instruments agree in measuring some response on a population of interest. Many standard agreement coefficients (e.g. kappa for nominal, weighted kappa for ordinal, and the concordance correlation coefficient (CCC) for continuous responses) may indicate increasing agreement as the marginal distributions of the two instruments become more different, even as the true cost of disagreement stays the same or increases. This problem has been described for the kappa coefficients; here we describe it for the CCC. We propose a solution for all types of responses in the form of random marginal agreement coefficients (RMACs), which use a different adjustment for chance than the standard agreement coefficients. Standard agreement coefficients model chance agreement using expected agreement between two independent random variables, each distributed according to the marginal distribution of one of the instruments. RMACs adjust for chance by modeling two independent readings both from the mixture distribution that averages the two marginal distributions. In other words, both independent readings represent first a random choice of instrument, then a random draw from the marginal distribution of the chosen instrument. The advantage of the resulting RMAC is that differences between the two marginal distributions will not induce greater apparent agreement. As with the standard agreement coefficients, the RMACs do not require any assumptions about the bivariate distribution of the random variables associated with the two instruments. We describe the RMAC for nominal, ordinal, and continuous data, and show through the delta method how to approximate the variances of some important special cases.
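The abstract's contrast between the two chance corrections can be sketched numerically for the nominal case. In the sketch below (my own illustration, not code from the paper), standard kappa's chance term is the expected agreement of independent draws from each instrument's own marginal, while the RMAC-style chance term uses two independent draws from the averaged (mixture) marginal; the function name `kappa_and_rmac` and the example joint table are hypothetical.

```python
import numpy as np

def kappa_and_rmac(table):
    """table[i, j] = joint probability that instrument 1 reads i and instrument 2 reads j."""
    table = np.asarray(table, dtype=float)
    table = table / table.sum()            # normalize to a joint probability table
    p = table.sum(axis=1)                  # marginal distribution of instrument 1
    q = table.sum(axis=0)                  # marginal distribution of instrument 2
    po = np.trace(table)                   # observed agreement (diagonal mass)
    pe_kappa = np.sum(p * q)               # chance: independent draws from each marginal
    pe_rmac = np.sum(((p + q) / 2) ** 2)   # chance: two draws from the mixture marginal
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    rmac = (po - pe_rmac) / (1 - pe_rmac)
    return kappa, rmac

# Two instruments with different marginals (0.60/0.40 vs 0.50/0.50):
# kappa's chance term is smaller than the RMAC's, so kappa looks better
# purely because the marginals disagree.
joint = [[0.45, 0.15],
         [0.05, 0.35]]
k, r = kappa_and_rmac(joint)
print(f"kappa = {k:.3f}, RMAC = {r:.3f}")  # kappa = 0.600, RMAC = 0.596
```

Since ((p+q)/2)^2 - pq = ((p-q)/2)^2 >= 0 in each category, the mixture chance term is never smaller than kappa's, so this RMAC-style coefficient never exceeds kappa and the gap grows as the marginals diverge, matching the abstract's point that marginal differences should not inflate apparent agreement.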
Pages: 171-180
Page count: 10