When studying the degree of overall agreement between the nominal responses of two raters, it is customary to use the kappa coefficient. A more detailed analysis requires evaluating the degree of agreement category by category, and this is usually carried out in one of two ways: using the value of kappa in the 2 × 2 table collapsed around each category, or using the agreement index for each category (the proportion of agreements observed). Both indices have drawbacks: the former is sensitive to the marginal totals; the latter is not chance corrected; and neither distinguishes the case where one of the two raters is a gold standard (an expert) from the case where neither rater is a gold standard. This article proposes five chance-corrected indices that are not sensitive to the marginal totals and that differ depending on whether or not one of the raters is a standard. The article also explains why kappa performs poorly when the two sets of marginal totals are unbalanced (especially when they are unbalanced in opposite directions) and why it performs well when analysing the various 2 × 2 tables obtained by collapsing a wider table.
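
For reference, the two standard category-wise analyses mentioned above can be sketched in common notation; this is illustrative background only, not the five indices proposed in the article. Let $p_{ij}$ denote the observed proportion of subjects classified in category $i$ by the first rater and in category $j$ by the second, with marginal proportions $p_{i\cdot}$ and $p_{\cdot i}$. The overall kappa is
\[
\kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\qquad
p_o = \sum_{i} p_{ii},
\qquad
p_e = \sum_{i} p_{i\cdot}\, p_{\cdot i}.
\]
The kappa for category $i$ is obtained by collapsing the table to 2 × 2 (category $i$ versus all other categories pooled) and applying the same formula to that table:
\[
\kappa_i \;=\; \frac{p_o(i) - p_e(i)}{1 - p_e(i)},
\qquad
p_o(i) = p_{ii} + \bigl(1 - p_{i\cdot} - p_{\cdot i} + p_{ii}\bigr),
\qquad
p_e(i) = p_{i\cdot}\, p_{\cdot i} + (1 - p_{i\cdot})(1 - p_{\cdot i}).
\]
One common form of the per-category agreement index, the proportion of specific agreement (the article's precise definition of the "proportion of agreements observed" may differ), is
\[
p_s(i) \;=\; \frac{2\, p_{ii}}{p_{i\cdot} + p_{\cdot i}}.
\]
Note that $\kappa_i$ depends on the marginals $p_{i\cdot}$ and $p_{\cdot i}$ through $p_e(i)$, which is the sense in which it is sensitive to the marginal totals, while $p_s(i)$ involves no chance correction at all.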